Nov 26 12:08:19 localhost kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 26 12:08:19 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 26 12:08:19 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 26 12:08:19 localhost kernel: BIOS-provided physical RAM map:
Nov 26 12:08:19 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 26 12:08:19 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 26 12:08:19 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 26 12:08:19 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Nov 26 12:08:19 localhost kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Nov 26 12:08:19 localhost kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 26 12:08:19 localhost kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 26 12:08:19 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 26 12:08:19 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 26 12:08:19 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000027fffffff] usable
Nov 26 12:08:19 localhost kernel: NX (Execute Disable) protection: active
Nov 26 12:08:19 localhost kernel: APIC: Static calls initialized
Nov 26 12:08:19 localhost kernel: SMBIOS 2.8 present.
Nov 26 12:08:19 localhost kernel: DMI: Red Hat OpenStack Compute/RHEL, BIOS 1.16.1-1.el9 04/01/2014
Nov 26 12:08:19 localhost kernel: Hypervisor detected: KVM
Nov 26 12:08:19 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 26 12:08:19 localhost kernel: kvm-clock: using sched offset of 3844938102 cycles
Nov 26 12:08:19 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 26 12:08:19 localhost kernel: tsc: Detected 2445.406 MHz processor
Nov 26 12:08:19 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 26 12:08:19 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 26 12:08:19 localhost kernel: last_pfn = 0x280000 max_arch_pfn = 0x400000000
Nov 26 12:08:19 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 26 12:08:19 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 26 12:08:19 localhost kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 26 12:08:19 localhost kernel: found SMP MP-table at [mem 0x000f5b60-0x000f5b6f]
Nov 26 12:08:19 localhost kernel: Using GB pages for direct mapping
Nov 26 12:08:19 localhost kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 26 12:08:19 localhost kernel: ACPI: Early table checksum verification disabled
Nov 26 12:08:19 localhost kernel: ACPI: RSDP 0x00000000000F5B20 000014 (v00 BOCHS )
Nov 26 12:08:19 localhost kernel: ACPI: RSDT 0x000000007FFE35EB 000034 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 26 12:08:19 localhost kernel: ACPI: FACP 0x000000007FFE3403 0000F4 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 26 12:08:19 localhost kernel: ACPI: DSDT 0x000000007FFDFCC0 003743 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 26 12:08:19 localhost kernel: ACPI: FACS 0x000000007FFDFC80 000040
Nov 26 12:08:19 localhost kernel: ACPI: APIC 0x000000007FFE34F7 000090 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 26 12:08:19 localhost kernel: ACPI: MCFG 0x000000007FFE3587 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 26 12:08:19 localhost kernel: ACPI: WAET 0x000000007FFE35C3 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 26 12:08:19 localhost kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe3403-0x7ffe34f6]
Nov 26 12:08:19 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfcc0-0x7ffe3402]
Nov 26 12:08:19 localhost kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfc80-0x7ffdfcbf]
Nov 26 12:08:19 localhost kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe34f7-0x7ffe3586]
Nov 26 12:08:19 localhost kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe3587-0x7ffe35c2]
Nov 26 12:08:19 localhost kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe35c3-0x7ffe35ea]
Nov 26 12:08:19 localhost kernel: No NUMA configuration found
Nov 26 12:08:19 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000027fffffff]
Nov 26 12:08:19 localhost kernel: NODE_DATA(0) allocated [mem 0x27ffd3000-0x27fffdfff]
Nov 26 12:08:19 localhost kernel: crashkernel reserved: 0x000000006f000000 - 0x000000007f000000 (256 MB)
Nov 26 12:08:19 localhost kernel: Zone ranges:
Nov 26 12:08:19 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 26 12:08:19 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 26 12:08:19 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000027fffffff]
Nov 26 12:08:19 localhost kernel:   Device   empty
Nov 26 12:08:19 localhost kernel: Movable zone start for each node
Nov 26 12:08:19 localhost kernel: Early memory node ranges
Nov 26 12:08:19 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 26 12:08:19 localhost kernel:   node   0: [mem 0x0000000000100000-0x000000007ffdafff]
Nov 26 12:08:19 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000027fffffff]
Nov 26 12:08:19 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000027fffffff]
Nov 26 12:08:19 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 26 12:08:19 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 26 12:08:19 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 26 12:08:19 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Nov 26 12:08:19 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 26 12:08:19 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 26 12:08:19 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 26 12:08:19 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 26 12:08:19 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 26 12:08:19 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 26 12:08:19 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 26 12:08:19 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 26 12:08:19 localhost kernel: TSC deadline timer available
Nov 26 12:08:19 localhost kernel: CPU topo: Max. logical packages:   4
Nov 26 12:08:19 localhost kernel: CPU topo: Max. logical dies:       4
Nov 26 12:08:19 localhost kernel: CPU topo: Max. dies per package:   1
Nov 26 12:08:19 localhost kernel: CPU topo: Max. threads per core:   1
Nov 26 12:08:19 localhost kernel: CPU topo: Num. cores per package:     1
Nov 26 12:08:19 localhost kernel: CPU topo: Num. threads per package:   1
Nov 26 12:08:19 localhost kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 26 12:08:19 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 26 12:08:19 localhost kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 26 12:08:19 localhost kernel: kvm-guest: setup PV sched yield
Nov 26 12:08:19 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 26 12:08:19 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 26 12:08:19 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 26 12:08:19 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 26 12:08:19 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x7ffdb000-0x7fffffff]
Nov 26 12:08:19 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x80000000-0xafffffff]
Nov 26 12:08:19 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xb0000000-0xbfffffff]
Nov 26 12:08:19 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfed1bfff]
Nov 26 12:08:19 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfed1c000-0xfed1ffff]
Nov 26 12:08:19 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfed20000-0xfeffbfff]
Nov 26 12:08:19 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 26 12:08:19 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 26 12:08:19 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 26 12:08:19 localhost kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 26 12:08:19 localhost kernel: Booting paravirtualized kernel on KVM
Nov 26 12:08:19 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 26 12:08:19 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 26 12:08:19 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u524288
Nov 26 12:08:19 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u524288 alloc=1*2097152
Nov 26 12:08:19 localhost kernel: pcpu-alloc: [0] 0 1 2 3 
Nov 26 12:08:19 localhost kernel: kvm-guest: PV spinlocks enabled
Nov 26 12:08:19 localhost kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 26 12:08:19 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 26 12:08:19 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 26 12:08:19 localhost kernel: random: crng init done
Nov 26 12:08:19 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 26 12:08:19 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 26 12:08:19 localhost kernel: Fallback order for Node 0: 0 
Nov 26 12:08:19 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 26 12:08:19 localhost kernel: Policy zone: Normal
Nov 26 12:08:19 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 26 12:08:19 localhost kernel: software IO TLB: area num 4.
Nov 26 12:08:19 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 26 12:08:19 localhost kernel: ftrace: allocating 49313 entries in 193 pages
Nov 26 12:08:19 localhost kernel: ftrace: allocated 193 pages with 3 groups
Nov 26 12:08:19 localhost kernel: Dynamic Preempt: voluntary
Nov 26 12:08:19 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 26 12:08:19 localhost kernel: rcu:         RCU event tracing is enabled.
Nov 26 12:08:19 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=4.
Nov 26 12:08:19 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Nov 26 12:08:19 localhost kernel:         Rude variant of Tasks RCU enabled.
Nov 26 12:08:19 localhost kernel:         Tracing variant of Tasks RCU enabled.
Nov 26 12:08:19 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 26 12:08:19 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 26 12:08:19 localhost kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 26 12:08:19 localhost kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 26 12:08:19 localhost kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 26 12:08:19 localhost kernel: NR_IRQS: 524544, nr_irqs: 456, preallocated irqs: 16
Nov 26 12:08:19 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 26 12:08:19 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 26 12:08:19 localhost kernel: Console: colour VGA+ 80x25
Nov 26 12:08:19 localhost kernel: printk: console [ttyS0] enabled
Nov 26 12:08:19 localhost kernel: ACPI: Core revision 20230331
Nov 26 12:08:19 localhost kernel: APIC: Switch to symmetric I/O mode setup
Nov 26 12:08:19 localhost kernel: x2apic enabled
Nov 26 12:08:19 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Nov 26 12:08:19 localhost kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 26 12:08:19 localhost kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 26 12:08:19 localhost kernel: kvm-guest: setup PV IPIs
Nov 26 12:08:19 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 26 12:08:19 localhost kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
Nov 26 12:08:19 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 26 12:08:19 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 26 12:08:19 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 26 12:08:19 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 26 12:08:19 localhost kernel: Spectre V2 : Mitigation: Retpolines
Nov 26 12:08:19 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 26 12:08:19 localhost kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 26 12:08:19 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 26 12:08:19 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 26 12:08:19 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 26 12:08:19 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 26 12:08:19 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 26 12:08:19 localhost kernel: Transient Scheduler Attacks: Vulnerable: No microcode
Nov 26 12:08:19 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 26 12:08:19 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 26 12:08:19 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 26 12:08:19 localhost kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 26 12:08:19 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 26 12:08:19 localhost kernel: x86/fpu: xstate_offset[9]:  832, xstate_sizes[9]:    8
Nov 26 12:08:19 localhost kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Nov 26 12:08:19 localhost kernel: Freeing SMP alternatives memory: 40K
Nov 26 12:08:19 localhost kernel: pid_max: default: 32768 minimum: 301
Nov 26 12:08:19 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 26 12:08:19 localhost kernel: landlock: Up and running.
Nov 26 12:08:19 localhost kernel: Yama: becoming mindful.
Nov 26 12:08:19 localhost kernel: SELinux:  Initializing.
Nov 26 12:08:19 localhost kernel: LSM support for eBPF active
Nov 26 12:08:19 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 26 12:08:19 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 26 12:08:19 localhost kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Nov 26 12:08:19 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 26 12:08:19 localhost kernel: ... version:                0
Nov 26 12:08:19 localhost kernel: ... bit width:              48
Nov 26 12:08:19 localhost kernel: ... generic registers:      6
Nov 26 12:08:19 localhost kernel: ... value mask:             0000ffffffffffff
Nov 26 12:08:19 localhost kernel: ... max period:             00007fffffffffff
Nov 26 12:08:19 localhost kernel: ... fixed-purpose events:   0
Nov 26 12:08:19 localhost kernel: ... event mask:             000000000000003f
Nov 26 12:08:19 localhost kernel: signal: max sigframe size: 3376
Nov 26 12:08:19 localhost kernel: rcu: Hierarchical SRCU implementation.
Nov 26 12:08:19 localhost kernel: rcu:         Max phase no-delay instances is 400.
Nov 26 12:08:19 localhost kernel: smp: Bringing up secondary CPUs ...
Nov 26 12:08:19 localhost kernel: smpboot: x86: Booting SMP configuration:
Nov 26 12:08:19 localhost kernel: .... node  #0, CPUs:      #1 #2 #3
Nov 26 12:08:19 localhost kernel: smp: Brought up 1 node, 4 CPUs
Nov 26 12:08:19 localhost kernel: smpboot: Total of 4 processors activated (19563.24 BogoMIPS)
Nov 26 12:08:19 localhost kernel: node 0 deferred pages initialised in 8ms
Nov 26 12:08:19 localhost kernel: Memory: 7768176K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 615228K reserved, 0K cma-reserved)
Nov 26 12:08:19 localhost kernel: devtmpfs: initialized
Nov 26 12:08:19 localhost kernel: x86/mm: Memory block size: 128MB
Nov 26 12:08:19 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 26 12:08:19 localhost kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 26 12:08:19 localhost kernel: pinctrl core: initialized pinctrl subsystem
Nov 26 12:08:19 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 26 12:08:19 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 26 12:08:19 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 26 12:08:19 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 26 12:08:19 localhost kernel: audit: initializing netlink subsys (disabled)
Nov 26 12:08:19 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 26 12:08:19 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 26 12:08:19 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 26 12:08:19 localhost kernel: audit: type=2000 audit(1764158898.754:1): state=initialized audit_enabled=0 res=1
Nov 26 12:08:19 localhost kernel: cpuidle: using governor menu
Nov 26 12:08:19 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 26 12:08:19 localhost kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 26 12:08:19 localhost kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 26 12:08:19 localhost kernel: PCI: Using configuration type 1 for base access
Nov 26 12:08:19 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 26 12:08:19 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 26 12:08:19 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 26 12:08:19 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 26 12:08:19 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 26 12:08:19 localhost kernel: Demotion targets for Node 0: null
Nov 26 12:08:19 localhost kernel: cryptd: max_cpu_qlen set to 1000
Nov 26 12:08:19 localhost kernel: ACPI: Added _OSI(Module Device)
Nov 26 12:08:19 localhost kernel: ACPI: Added _OSI(Processor Device)
Nov 26 12:08:19 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 26 12:08:19 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 26 12:08:19 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 26 12:08:19 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 26 12:08:19 localhost kernel: ACPI: Interpreter enabled
Nov 26 12:08:19 localhost kernel: ACPI: PM: (supports S0 S5)
Nov 26 12:08:19 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Nov 26 12:08:19 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 26 12:08:19 localhost kernel: PCI: Using E820 reservations for host bridge windows
Nov 26 12:08:19 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 26 12:08:19 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 26 12:08:19 localhost kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 26 12:08:19 localhost kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR DPC]
Nov 26 12:08:19 localhost kernel: acpi PNP0A08:00: _OSC: OS now controls [SHPCHotplug PME AER PCIeCapability]
Nov 26 12:08:19 localhost kernel: PCI host bridge to bus 0000:00
Nov 26 12:08:19 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x280000000-0xa7fffffff window]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 26 12:08:19 localhost kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 26 12:08:19 localhost kernel: pci 0000:00:01.0: BAR 0 [mem 0xf9800000-0xf9ffffff pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:01.0: BAR 2 [mem 0xfc200000-0xfc203fff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:01.0: BAR 4 [mem 0xfea10000-0xfea10fff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:01.0: ROM [mem 0xfea00000-0xfea0ffff pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea11000-0xfea11fff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea12000-0xfea12fff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea13000-0xfea13fff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea14000-0xfea14fff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea15000-0xfea15fff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea16000-0xfea16fff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea17000-0xfea17fff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea18000-0xfea18fff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.0: BAR 0 [mem 0xfea19000-0xfea19fff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.1: BAR 0 [mem 0xfea1a000-0xfea1afff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.2: BAR 0 [mem 0xfea1b000-0xfea1bfff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.3: BAR 0 [mem 0xfea1c000-0xfea1cfff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.4: BAR 0 [mem 0xfea1d000-0xfea1dfff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.5: BAR 0 [mem 0xfea1e000-0xfea1efff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.6: BAR 0 [mem 0xfea1f000-0xfea1ffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.7: BAR 0 [mem 0xfea20000-0xfea20fff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 26 12:08:19 localhost kernel: pci 0000:00:04.0: BAR 0 [mem 0xfea21000-0xfea21fff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Nov 26 12:08:19 localhost kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 26 12:08:19 localhost kernel: pci 0000:00:1f.0: quirk: [io  0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 26 12:08:19 localhost kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 26 12:08:19 localhost kernel: pci 0000:00:1f.2: BAR 4 [io  0xd040-0xd05f]
Nov 26 12:08:19 localhost kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea22000-0xfea22fff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 26 12:08:19 localhost kernel: pci 0000:00:1f.3: BAR 4 [io  0x0700-0x073f]
Nov 26 12:08:19 localhost kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Nov 26 12:08:19 localhost kernel: pci 0000:01:00.0: BAR 0 [mem 0xfc800000-0xfc8000ff 64bit]
Nov 26 12:08:19 localhost kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 26 12:08:19 localhost kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Nov 26 12:08:19 localhost kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Nov 26 12:08:19 localhost kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:02: extended config space not accessible
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [0] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [1] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [2] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [3] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [4] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [5] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [6] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [7] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [8] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [9] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [10] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [11] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [12] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [13] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [14] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [15] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [16] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [17] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [18] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [19] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [20] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [21] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [22] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [23] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [24] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [25] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [26] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [27] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [28] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [29] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [30] registered
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [31] registered
Nov 26 12:08:19 localhost kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 26 12:08:19 localhost kernel: pci 0000:02:01.0: BAR 4 [io  0xc000-0xc01f]
Nov 26 12:08:19 localhost kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [0-2] registered
Nov 26 12:08:19 localhost kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Nov 26 12:08:19 localhost kernel: pci 0000:03:00.0: BAR 1 [mem 0xfe840000-0xfe840fff]
Nov 26 12:08:19 localhost kernel: pci 0000:03:00.0: BAR 4 [mem 0xfbe00000-0xfbe03fff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:03:00.0: ROM [mem 0xfe800000-0xfe83ffff pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [0-3] registered
Nov 26 12:08:19 localhost kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint
Nov 26 12:08:19 localhost kernel: pci 0000:04:00.0: BAR 1 [mem 0xfe600000-0xfe600fff]
Nov 26 12:08:19 localhost kernel: pci 0000:04:00.0: BAR 4 [mem 0xfbc00000-0xfbc03fff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [0-4] registered
Nov 26 12:08:19 localhost kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Nov 26 12:08:19 localhost kernel: pci 0000:05:00.0: BAR 4 [mem 0xfba00000-0xfba03fff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [0-5] registered
Nov 26 12:08:19 localhost kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Nov 26 12:08:19 localhost kernel: pci 0000:06:00.0: BAR 4 [mem 0xfb800000-0xfb803fff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [0-6] registered
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [0-7] registered
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [0-8] registered
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [0-9] registered
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [0-10] registered
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [0-11] registered
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [0-12] registered
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [0-13] registered
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [0-14] registered
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [0-15] registered
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [0-16] registered
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Nov 26 12:08:19 localhost kernel: acpiphp: Slot [0-17] registered
Nov 26 12:08:19 localhost kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Nov 26 12:08:19 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 26 12:08:19 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 26 12:08:19 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 26 12:08:19 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 26 12:08:19 localhost kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 26 12:08:19 localhost kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 26 12:08:19 localhost kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 26 12:08:19 localhost kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 26 12:08:19 localhost kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 26 12:08:19 localhost kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 26 12:08:19 localhost kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 26 12:08:19 localhost kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 26 12:08:19 localhost kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 26 12:08:19 localhost kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 26 12:08:19 localhost kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 26 12:08:19 localhost kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 26 12:08:19 localhost kernel: iommu: Default domain type: Translated
Nov 26 12:08:19 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 26 12:08:19 localhost kernel: SCSI subsystem initialized
Nov 26 12:08:19 localhost kernel: ACPI: bus type USB registered
Nov 26 12:08:19 localhost kernel: usbcore: registered new interface driver usbfs
Nov 26 12:08:19 localhost kernel: usbcore: registered new interface driver hub
Nov 26 12:08:19 localhost kernel: usbcore: registered new device driver usb
Nov 26 12:08:19 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 26 12:08:19 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 26 12:08:19 localhost kernel: PTP clock support registered
Nov 26 12:08:19 localhost kernel: EDAC MC: Ver: 3.0.0
Nov 26 12:08:19 localhost kernel: NetLabel: Initializing
Nov 26 12:08:19 localhost kernel: NetLabel:  domain hash size = 128
Nov 26 12:08:19 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 26 12:08:19 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Nov 26 12:08:19 localhost kernel: PCI: Using ACPI for IRQ routing
Nov 26 12:08:19 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 26 12:08:19 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 26 12:08:19 localhost kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 26 12:08:19 localhost kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 26 12:08:19 localhost kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 26 12:08:19 localhost kernel: vgaarb: loaded
Nov 26 12:08:19 localhost kernel: clocksource: Switched to clocksource kvm-clock
Nov 26 12:08:19 localhost kernel: VFS: Disk quotas dquot_6.6.0
Nov 26 12:08:19 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 26 12:08:19 localhost kernel: pnp: PnP ACPI init
Nov 26 12:08:19 localhost kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 26 12:08:19 localhost kernel: pnp: PnP ACPI: found 5 devices
Nov 26 12:08:19 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 26 12:08:19 localhost kernel: NET: Registered PF_INET protocol family
Nov 26 12:08:19 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 26 12:08:19 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 26 12:08:19 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 26 12:08:19 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 26 12:08:19 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 26 12:08:19 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 26 12:08:19 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 26 12:08:19 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 26 12:08:19 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 26 12:08:19 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 26 12:08:19 localhost kernel: NET: Registered PF_XDP protocol family
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x0fff] to [bus 03] add_size 1000
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.2: bridge window [io  0x1000-0x0fff] to [bus 04] add_size 1000
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.3: bridge window [io  0x1000-0x0fff] to [bus 05] add_size 1000
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.4: bridge window [io  0x1000-0x0fff] to [bus 06] add_size 1000
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.5: bridge window [io  0x1000-0x0fff] to [bus 07] add_size 1000
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.6: bridge window [io  0x1000-0x0fff] to [bus 08] add_size 1000
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.7: bridge window [io  0x1000-0x0fff] to [bus 09] add_size 1000
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.0: bridge window [io  0x1000-0x0fff] to [bus 0a] add_size 1000
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.1: bridge window [io  0x1000-0x0fff] to [bus 0b] add_size 1000
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.2: bridge window [io  0x1000-0x0fff] to [bus 0c] add_size 1000
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.3: bridge window [io  0x1000-0x0fff] to [bus 0d] add_size 1000
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.4: bridge window [io  0x1000-0x0fff] to [bus 0e] add_size 1000
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.5: bridge window [io  0x1000-0x0fff] to [bus 0f] add_size 1000
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.6: bridge window [io  0x1000-0x0fff] to [bus 10] add_size 1000
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.7: bridge window [io  0x1000-0x0fff] to [bus 11] add_size 1000
Nov 26 12:08:19 localhost kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x0fff] to [bus 12] add_size 1000
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x1fff]: assigned
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.2: bridge window [io  0x2000-0x2fff]: assigned
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.3: bridge window [io  0x3000-0x3fff]: assigned
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.4: bridge window [io  0x4000-0x4fff]: assigned
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.5: bridge window [io  0x5000-0x5fff]: assigned
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.6: bridge window [io  0x6000-0x6fff]: assigned
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.7: bridge window [io  0x7000-0x7fff]: assigned
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.0: bridge window [io  0x8000-0x8fff]: assigned
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.1: bridge window [io  0x9000-0x9fff]: assigned
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.2: bridge window [io  0xa000-0xafff]: assigned
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.3: bridge window [io  0xb000-0xbfff]: assigned
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.4: bridge window [io  0xe000-0xefff]: assigned
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.5: bridge window [io  0xf000-0xffff]: assigned
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: can't assign; no space
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: failed to assign
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: can't assign; no space
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: failed to assign
Nov 26 12:08:19 localhost kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: can't assign; no space
Nov 26 12:08:19 localhost kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: failed to assign
Nov 26 12:08:19 localhost kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x1fff]: assigned
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.7: bridge window [io  0x2000-0x2fff]: assigned
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.6: bridge window [io  0x3000-0x3fff]: assigned
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.5: bridge window [io  0x4000-0x4fff]: assigned
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.4: bridge window [io  0x5000-0x5fff]: assigned
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.3: bridge window [io  0x6000-0x6fff]: assigned
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.2: bridge window [io  0x7000-0x7fff]: assigned
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.1: bridge window [io  0x8000-0x8fff]: assigned
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.0: bridge window [io  0x9000-0x9fff]: assigned
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.7: bridge window [io  0xa000-0xafff]: assigned
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.6: bridge window [io  0xb000-0xbfff]: assigned
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.5: bridge window [io  0xe000-0xefff]: assigned
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.4: bridge window [io  0xf000-0xffff]: assigned
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: can't assign; no space
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: failed to assign
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: can't assign; no space
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: failed to assign
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: can't assign; no space
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: failed to assign
Nov 26 12:08:19 localhost kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 26 12:08:19 localhost kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Nov 26 12:08:19 localhost kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Nov 26 12:08:19 localhost kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.4:   bridge window [io  0xf000-0xffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.5:   bridge window [io  0xe000-0xefff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.6:   bridge window [io  0xb000-0xbfff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.7:   bridge window [io  0xa000-0xafff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.0:   bridge window [io  0x9000-0x9fff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.1:   bridge window [io  0x8000-0x8fff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.2:   bridge window [io  0x7000-0x7fff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.3:   bridge window [io  0x6000-0x6fff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.4:   bridge window [io  0x5000-0x5fff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.5:   bridge window [io  0x4000-0x4fff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.6:   bridge window [io  0x3000-0x3fff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.7:   bridge window [io  0x2000-0x2fff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Nov 26 12:08:19 localhost kernel: pci 0000:00:04.0:   bridge window [io  0x1000-0x1fff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Nov 26 12:08:19 localhost kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:00: resource 9 [mem 0x280000000-0xa7fffffff window]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:01: resource 0 [io  0xc000-0xcfff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:01: resource 1 [mem 0xfc600000-0xfc9fffff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:01: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:02: resource 0 [io  0xc000-0xcfff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:02: resource 1 [mem 0xfc600000-0xfc7fffff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:02: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:03: resource 2 [mem 0xfbe00000-0xfbffffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:04: resource 2 [mem 0xfbc00000-0xfbdfffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:05: resource 2 [mem 0xfba00000-0xfbbfffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:06: resource 0 [io  0xf000-0xffff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:06: resource 2 [mem 0xfb800000-0xfb9fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:07: resource 0 [io  0xe000-0xefff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:07: resource 2 [mem 0xfb600000-0xfb7fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:08: resource 0 [io  0xb000-0xbfff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:08: resource 2 [mem 0xfb400000-0xfb5fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:09: resource 0 [io  0xa000-0xafff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:09: resource 2 [mem 0xfb200000-0xfb3fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:0a: resource 0 [io  0x9000-0x9fff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:0a: resource 1 [mem 0xfda00000-0xfdbfffff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:0a: resource 2 [mem 0xfb000000-0xfb1fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:0b: resource 0 [io  0x8000-0x8fff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:0b: resource 1 [mem 0xfd800000-0xfd9fffff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:0b: resource 2 [mem 0xfae00000-0xfaffffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:0c: resource 0 [io  0x7000-0x7fff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:0c: resource 1 [mem 0xfd600000-0xfd7fffff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:0c: resource 2 [mem 0xfac00000-0xfadfffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:0d: resource 0 [io  0x6000-0x6fff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:0d: resource 1 [mem 0xfd400000-0xfd5fffff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:0d: resource 2 [mem 0xfaa00000-0xfabfffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:0e: resource 0 [io  0x5000-0x5fff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:0e: resource 1 [mem 0xfd200000-0xfd3fffff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:0e: resource 2 [mem 0xfa800000-0xfa9fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:0f: resource 0 [io  0x4000-0x4fff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:0f: resource 1 [mem 0xfd000000-0xfd1fffff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:0f: resource 2 [mem 0xfa600000-0xfa7fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:10: resource 0 [io  0x3000-0x3fff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:10: resource 1 [mem 0xfce00000-0xfcffffff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:10: resource 2 [mem 0xfa400000-0xfa5fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:11: resource 0 [io  0x2000-0x2fff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:11: resource 1 [mem 0xfcc00000-0xfcdfffff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:11: resource 2 [mem 0xfa200000-0xfa3fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:12: resource 0 [io  0x1000-0x1fff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:12: resource 1 [mem 0xfca00000-0xfcbfffff]
Nov 26 12:08:19 localhost kernel: pci_bus 0000:12: resource 2 [mem 0xfa000000-0xfa1fffff 64bit pref]
Nov 26 12:08:19 localhost kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 26 12:08:19 localhost kernel: PCI: CLS 0 bytes, default 64
Nov 26 12:08:19 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 26 12:08:19 localhost kernel: software IO TLB: mapped [mem 0x000000006b000000-0x000000006f000000] (64MB)
Nov 26 12:08:19 localhost kernel: Trying to unpack rootfs image as initramfs...
Nov 26 12:08:19 localhost kernel: ACPI: bus type thunderbolt registered
Nov 26 12:08:19 localhost kernel: Initialise system trusted keyrings
Nov 26 12:08:19 localhost kernel: Key type blacklist registered
Nov 26 12:08:19 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 26 12:08:19 localhost kernel: zbud: loaded
Nov 26 12:08:19 localhost kernel: integrity: Platform Keyring initialized
Nov 26 12:08:19 localhost kernel: integrity: Machine keyring initialized
Nov 26 12:08:19 localhost kernel: Freeing initrd memory: 85868K
Nov 26 12:08:19 localhost kernel: NET: Registered PF_ALG protocol family
Nov 26 12:08:19 localhost kernel: xor: automatically using best checksumming function   avx       
Nov 26 12:08:19 localhost kernel: Key type asymmetric registered
Nov 26 12:08:19 localhost kernel: Asymmetric key parser 'x509' registered
Nov 26 12:08:19 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 26 12:08:19 localhost kernel: io scheduler mq-deadline registered
Nov 26 12:08:19 localhost kernel: io scheduler kyber registered
Nov 26 12:08:19 localhost kernel: io scheduler bfq registered
Nov 26 12:08:19 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Nov 26 12:08:19 localhost kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 39
Nov 26 12:08:19 localhost kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40
Nov 26 12:08:19 localhost kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40
Nov 26 12:08:19 localhost kernel: shpchp 0000:01:00.0: HPC vendor_id 1b36 device_id e ss_vid 0 ss_did 0
Nov 26 12:08:19 localhost kernel: shpchp 0000:01:00.0: pci_hp_register failed with error -16
Nov 26 12:08:19 localhost kernel: shpchp 0000:01:00.0: Slot initialization failed
Nov 26 12:08:19 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 26 12:08:19 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 26 12:08:19 localhost kernel: ACPI: button: Power Button [PWRF]
Nov 26 12:08:19 localhost kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Nov 26 12:08:19 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 26 12:08:19 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 26 12:08:19 localhost kernel: Non-volatile memory driver v1.3
Nov 26 12:08:19 localhost kernel: rdac: device handler registered
Nov 26 12:08:19 localhost kernel: hp_sw: device handler registered
Nov 26 12:08:19 localhost kernel: emc: device handler registered
Nov 26 12:08:19 localhost kernel: alua: device handler registered
Nov 26 12:08:19 localhost kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller
Nov 26 12:08:19 localhost kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1
Nov 26 12:08:19 localhost kernel: uhci_hcd 0000:02:01.0: detected 2 ports
Nov 26 12:08:19 localhost kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x0000c000
Nov 26 12:08:19 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 26 12:08:19 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 26 12:08:19 localhost kernel: usb usb1: Product: UHCI Host Controller
Nov 26 12:08:19 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 26 12:08:19 localhost kernel: usb usb1: SerialNumber: 0000:02:01.0
Nov 26 12:08:19 localhost kernel: hub 1-0:1.0: USB hub found
Nov 26 12:08:19 localhost kernel: hub 1-0:1.0: 2 ports detected
Nov 26 12:08:19 localhost kernel: usbcore: registered new interface driver usbserial_generic
Nov 26 12:08:19 localhost kernel: usbserial: USB Serial support registered for generic
Nov 26 12:08:19 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 26 12:08:19 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 26 12:08:19 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 26 12:08:19 localhost kernel: mousedev: PS/2 mouse device common for all mice
Nov 26 12:08:19 localhost kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 26 12:08:19 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 26 12:08:19 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 26 12:08:19 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 26 12:08:19 localhost kernel: rtc_cmos 00:03: registered as rtc0
Nov 26 12:08:19 localhost kernel: rtc_cmos 00:03: setting system clock to 2025-11-26T12:08:19 UTC (1764158899)
Nov 26 12:08:19 localhost kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 26 12:08:19 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 26 12:08:19 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 26 12:08:19 localhost kernel: usbcore: registered new interface driver usbhid
Nov 26 12:08:19 localhost kernel: usbhid: USB HID core driver
Nov 26 12:08:19 localhost kernel: drop_monitor: Initializing network drop monitor service
Nov 26 12:08:19 localhost kernel: Initializing XFRM netlink socket
Nov 26 12:08:19 localhost kernel: NET: Registered PF_INET6 protocol family
Nov 26 12:08:19 localhost kernel: Segment Routing with IPv6
Nov 26 12:08:19 localhost kernel: NET: Registered PF_PACKET protocol family
Nov 26 12:08:19 localhost kernel: mpls_gso: MPLS GSO support
Nov 26 12:08:19 localhost kernel: IPI shorthand broadcast: enabled
Nov 26 12:08:19 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Nov 26 12:08:19 localhost kernel: AES CTR mode by8 optimization enabled
Nov 26 12:08:19 localhost kernel: sched_clock: Marking stable (1140001870, 142016847)->(1348772546, -66753829)
Nov 26 12:08:19 localhost kernel: registered taskstats version 1
Nov 26 12:08:19 localhost kernel: Loading compiled-in X.509 certificates
Nov 26 12:08:19 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 26 12:08:19 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 26 12:08:19 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 26 12:08:19 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 26 12:08:19 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 26 12:08:19 localhost kernel: Demotion targets for Node 0: null
Nov 26 12:08:19 localhost kernel: page_owner is disabled
Nov 26 12:08:19 localhost kernel: Key type .fscrypt registered
Nov 26 12:08:19 localhost kernel: Key type fscrypt-provisioning registered
Nov 26 12:08:19 localhost kernel: Key type big_key registered
Nov 26 12:08:19 localhost kernel: Key type encrypted registered
Nov 26 12:08:19 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 26 12:08:19 localhost kernel: Loading compiled-in module X.509 certificates
Nov 26 12:08:19 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 26 12:08:19 localhost kernel: ima: Allocated hash algorithm: sha256
Nov 26 12:08:19 localhost kernel: ima: No architecture policies found
Nov 26 12:08:19 localhost kernel: evm: Initialising EVM extended attributes:
Nov 26 12:08:19 localhost kernel: evm: security.selinux
Nov 26 12:08:19 localhost kernel: evm: security.SMACK64 (disabled)
Nov 26 12:08:19 localhost kernel: evm: security.SMACK64EXEC (disabled)
Nov 26 12:08:19 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 26 12:08:19 localhost kernel: evm: security.SMACK64MMAP (disabled)
Nov 26 12:08:19 localhost kernel: evm: security.apparmor (disabled)
Nov 26 12:08:19 localhost kernel: evm: security.ima
Nov 26 12:08:19 localhost kernel: evm: security.capability
Nov 26 12:08:19 localhost kernel: evm: HMAC attrs: 0x1
Nov 26 12:08:19 localhost kernel: Running certificate verification RSA selftest
Nov 26 12:08:19 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 26 12:08:19 localhost kernel: Running certificate verification ECDSA selftest
Nov 26 12:08:19 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 26 12:08:19 localhost kernel: clk: Disabling unused clocks
Nov 26 12:08:19 localhost kernel: Freeing unused decrypted memory: 2028K
Nov 26 12:08:19 localhost kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 26 12:08:19 localhost kernel: Write protecting the kernel read-only data: 30720k
Nov 26 12:08:19 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 26 12:08:19 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 26 12:08:19 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 26 12:08:19 localhost kernel: Run /init as init process
Nov 26 12:08:19 localhost kernel:   with arguments:
Nov 26 12:08:19 localhost kernel:     /init
Nov 26 12:08:19 localhost kernel:   with environment:
Nov 26 12:08:19 localhost kernel:     HOME=/
Nov 26 12:08:19 localhost kernel:     TERM=linux
Nov 26 12:08:19 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64
Nov 26 12:08:19 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 26 12:08:19 localhost systemd[1]: Detected virtualization kvm.
Nov 26 12:08:19 localhost systemd[1]: Detected architecture x86-64.
Nov 26 12:08:19 localhost systemd[1]: Running in initrd.
Nov 26 12:08:19 localhost systemd[1]: No hostname configured, using default hostname.
Nov 26 12:08:19 localhost systemd[1]: Hostname set to <localhost>.
Nov 26 12:08:19 localhost systemd[1]: Initializing machine ID from VM UUID.
Nov 26 12:08:19 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Nov 26 12:08:19 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 26 12:08:19 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 26 12:08:19 localhost systemd[1]: Reached target Initrd /usr File System.
Nov 26 12:08:19 localhost systemd[1]: Reached target Local File Systems.
Nov 26 12:08:19 localhost systemd[1]: Reached target Path Units.
Nov 26 12:08:19 localhost systemd[1]: Reached target Slice Units.
Nov 26 12:08:19 localhost systemd[1]: Reached target Swaps.
Nov 26 12:08:19 localhost systemd[1]: Reached target Timer Units.
Nov 26 12:08:19 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 26 12:08:19 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Nov 26 12:08:19 localhost systemd[1]: Listening on Journal Socket.
Nov 26 12:08:19 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 26 12:08:19 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 26 12:08:19 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Nov 26 12:08:19 localhost kernel: usb 1-1: Manufacturer: QEMU
Nov 26 12:08:19 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:02.0:00.0:01.0-1
Nov 26 12:08:19 localhost systemd[1]: Listening on udev Control Socket.
Nov 26 12:08:19 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 26 12:08:19 localhost systemd[1]: Reached target Socket Units.
Nov 26 12:08:19 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 26 12:08:19 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0
Nov 26 12:08:19 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 26 12:08:19 localhost systemd[1]: Starting Journal Service...
Nov 26 12:08:19 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 26 12:08:19 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 26 12:08:19 localhost systemd[1]: Starting Create System Users...
Nov 26 12:08:19 localhost systemd[1]: Starting Setup Virtual Console...
Nov 26 12:08:19 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 26 12:08:19 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 26 12:08:19 localhost systemd-journald[284]: Journal started
Nov 26 12:08:19 localhost systemd-journald[284]: Runtime Journal (/run/log/journal/0a08c8a3e2a843648947610c4936d879) is 8.0M, max 153.6M, 145.6M free.
Nov 26 12:08:19 localhost systemd[1]: Started Journal Service.
Nov 26 12:08:19 localhost systemd-sysusers[287]: Creating group 'users' with GID 100.
Nov 26 12:08:19 localhost systemd-sysusers[287]: Creating group 'dbus' with GID 81.
Nov 26 12:08:19 localhost systemd-sysusers[287]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 26 12:08:19 localhost systemd[1]: Finished Create System Users.
Nov 26 12:08:19 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 26 12:08:20 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 26 12:08:20 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 26 12:08:20 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 26 12:08:20 localhost systemd[1]: Finished Setup Virtual Console.
Nov 26 12:08:20 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 26 12:08:20 localhost systemd[1]: Starting dracut cmdline hook...
Nov 26 12:08:20 localhost dracut-cmdline[301]: dracut-9 dracut-057-102.git20250818.el9
Nov 26 12:08:20 localhost dracut-cmdline[301]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 26 12:08:20 localhost systemd[1]: Finished dracut cmdline hook.
Nov 26 12:08:20 localhost systemd[1]: Starting dracut pre-udev hook...
Nov 26 12:08:20 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 26 12:08:20 localhost kernel: device-mapper: uevent: version 1.0.3
Nov 26 12:08:20 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 26 12:08:20 localhost kernel: RPC: Registered named UNIX socket transport module.
Nov 26 12:08:20 localhost kernel: RPC: Registered udp transport module.
Nov 26 12:08:20 localhost kernel: RPC: Registered tcp transport module.
Nov 26 12:08:20 localhost kernel: RPC: Registered tcp-with-tls transport module.
Nov 26 12:08:20 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 26 12:08:20 localhost rpc.statd[417]: Version 2.5.4 starting
Nov 26 12:08:20 localhost rpc.statd[417]: Initializing NSM state
Nov 26 12:08:20 localhost rpc.idmapd[422]: Setting log level to 0
Nov 26 12:08:20 localhost systemd[1]: Finished dracut pre-udev hook.
Nov 26 12:08:20 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 26 12:08:20 localhost systemd-udevd[435]: Using default interface naming scheme 'rhel-9.0'.
Nov 26 12:08:20 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 26 12:08:20 localhost systemd[1]: Starting dracut pre-trigger hook...
Nov 26 12:08:20 localhost systemd[1]: Finished dracut pre-trigger hook.
Nov 26 12:08:20 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 26 12:08:20 localhost systemd[1]: Created slice Slice /system/modprobe.
Nov 26 12:08:20 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 26 12:08:20 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 26 12:08:20 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 26 12:08:20 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 26 12:08:20 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 26 12:08:20 localhost systemd[1]: Reached target Network.
Nov 26 12:08:20 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 26 12:08:20 localhost systemd[1]: Starting dracut initqueue hook...
Nov 26 12:08:20 localhost kernel: virtio_blk virtio2: 4/0/0 default/read/poll queues
Nov 26 12:08:20 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 26 12:08:20 localhost kernel:  vda: vda1
Nov 26 12:08:20 localhost systemd-udevd[456]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 12:08:20 localhost systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 26 12:08:20 localhost systemd[1]: Reached target Initrd Root Device.
Nov 26 12:08:20 localhost kernel: libata version 3.00 loaded.
Nov 26 12:08:20 localhost kernel: ahci 0000:00:1f.2: version 3.0
Nov 26 12:08:20 localhost kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 26 12:08:20 localhost kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 26 12:08:20 localhost kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 26 12:08:20 localhost kernel: ahci 0000:00:1f.2: flags: 64bit ncq only 
Nov 26 12:08:20 localhost kernel: scsi host0: ahci
Nov 26 12:08:20 localhost kernel: scsi host1: ahci
Nov 26 12:08:20 localhost kernel: scsi host2: ahci
Nov 26 12:08:20 localhost kernel: scsi host3: ahci
Nov 26 12:08:20 localhost kernel: scsi host4: ahci
Nov 26 12:08:20 localhost kernel: scsi host5: ahci
Nov 26 12:08:20 localhost kernel: ata1: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22100 irq 49 lpm-pol 0
Nov 26 12:08:20 localhost kernel: ata2: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22180 irq 49 lpm-pol 0
Nov 26 12:08:20 localhost kernel: ata3: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22200 irq 49 lpm-pol 0
Nov 26 12:08:20 localhost kernel: ata4: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22280 irq 49 lpm-pol 0
Nov 26 12:08:20 localhost kernel: ata5: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22300 irq 49 lpm-pol 0
Nov 26 12:08:20 localhost kernel: ata6: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22380 irq 49 lpm-pol 0
Nov 26 12:08:20 localhost systemd[1]: Mounting Kernel Configuration File System...
Nov 26 12:08:20 localhost systemd[1]: Mounted Kernel Configuration File System.
Nov 26 12:08:20 localhost systemd[1]: Reached target System Initialization.
Nov 26 12:08:20 localhost systemd[1]: Reached target Basic System.
Nov 26 12:08:21 localhost kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 26 12:08:21 localhost kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 26 12:08:21 localhost kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 26 12:08:21 localhost kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 26 12:08:21 localhost kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 26 12:08:21 localhost kernel: ata3: SATA link down (SStatus 0 SControl 300)
Nov 26 12:08:21 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 26 12:08:21 localhost kernel: ata1.00: applying bridge limits
Nov 26 12:08:21 localhost kernel: ata1.00: configured for UDMA/100
Nov 26 12:08:21 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 26 12:08:21 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 26 12:08:21 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 26 12:08:21 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 26 12:08:21 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 26 12:08:21 localhost systemd[1]: Finished dracut initqueue hook.
Nov 26 12:08:21 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 26 12:08:21 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Nov 26 12:08:21 localhost systemd[1]: Reached target Remote File Systems.
Nov 26 12:08:21 localhost systemd[1]: Starting dracut pre-mount hook...
Nov 26 12:08:21 localhost systemd[1]: Finished dracut pre-mount hook.
Nov 26 12:08:21 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Nov 26 12:08:21 localhost systemd-fsck[526]: /usr/sbin/fsck.xfs: XFS file system.
Nov 26 12:08:21 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 26 12:08:21 localhost systemd[1]: Mounting /sysroot...
Nov 26 12:08:21 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 26 12:08:21 localhost kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Nov 26 12:08:21 localhost kernel: XFS (vda1): Ending clean mount
Nov 26 12:08:21 localhost systemd[1]: Mounted /sysroot.
Nov 26 12:08:21 localhost systemd[1]: Reached target Initrd Root File System.
Nov 26 12:08:21 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 26 12:08:21 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 26 12:08:21 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 26 12:08:21 localhost systemd[1]: Reached target Initrd File Systems.
Nov 26 12:08:21 localhost systemd[1]: Reached target Initrd Default Target.
Nov 26 12:08:21 localhost systemd[1]: Starting dracut mount hook...
Nov 26 12:08:21 localhost systemd[1]: Finished dracut mount hook.
Nov 26 12:08:21 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 26 12:08:21 localhost rpc.idmapd[422]: exiting on signal 15
Nov 26 12:08:21 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 26 12:08:21 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 26 12:08:21 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 26 12:08:21 localhost systemd[1]: Stopped target Network.
Nov 26 12:08:21 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 26 12:08:21 localhost systemd[1]: Stopped target Timer Units.
Nov 26 12:08:21 localhost systemd[1]: dbus.socket: Deactivated successfully.
Nov 26 12:08:21 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 26 12:08:21 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 26 12:08:21 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 26 12:08:21 localhost systemd[1]: Stopped target Initrd Default Target.
Nov 26 12:08:21 localhost systemd[1]: Stopped target Basic System.
Nov 26 12:08:21 localhost systemd[1]: Stopped target Initrd Root Device.
Nov 26 12:08:21 localhost systemd[1]: Stopped target Initrd /usr File System.
Nov 26 12:08:21 localhost systemd[1]: Stopped target Path Units.
Nov 26 12:08:21 localhost systemd[1]: Stopped target Remote File Systems.
Nov 26 12:08:21 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 26 12:08:21 localhost systemd[1]: Stopped target Slice Units.
Nov 26 12:08:21 localhost systemd[1]: Stopped target Socket Units.
Nov 26 12:08:21 localhost systemd[1]: Stopped target System Initialization.
Nov 26 12:08:21 localhost systemd[1]: Stopped target Local File Systems.
Nov 26 12:08:21 localhost systemd[1]: Stopped target Swaps.
Nov 26 12:08:21 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 26 12:08:21 localhost systemd[1]: Stopped dracut mount hook.
Nov 26 12:08:21 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 26 12:08:21 localhost systemd[1]: Stopped dracut pre-mount hook.
Nov 26 12:08:21 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Nov 26 12:08:21 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 26 12:08:21 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 26 12:08:21 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 26 12:08:21 localhost systemd[1]: Stopped dracut initqueue hook.
Nov 26 12:08:21 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 26 12:08:21 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 26 12:08:21 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 26 12:08:21 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Nov 26 12:08:21 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 26 12:08:21 localhost systemd[1]: Stopped Coldplug All udev Devices.
Nov 26 12:08:21 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 26 12:08:21 localhost systemd[1]: Stopped dracut pre-trigger hook.
Nov 26 12:08:21 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 26 12:08:21 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 26 12:08:21 localhost systemd[1]: Stopped Setup Virtual Console.
Nov 26 12:08:21 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 26 12:08:21 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 26 12:08:21 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 26 12:08:21 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 26 12:08:21 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 26 12:08:21 localhost systemd[1]: Closed udev Control Socket.
Nov 26 12:08:21 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 26 12:08:21 localhost systemd[1]: Closed udev Kernel Socket.
Nov 26 12:08:21 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 26 12:08:21 localhost systemd[1]: Stopped dracut pre-udev hook.
Nov 26 12:08:21 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 26 12:08:21 localhost systemd[1]: Stopped dracut cmdline hook.
Nov 26 12:08:21 localhost systemd[1]: Starting Cleanup udev Database...
Nov 26 12:08:21 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 26 12:08:21 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 26 12:08:21 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 26 12:08:21 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Nov 26 12:08:21 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 26 12:08:21 localhost systemd[1]: Stopped Create System Users.
Nov 26 12:08:21 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 26 12:08:21 localhost systemd[1]: Finished Cleanup udev Database.
Nov 26 12:08:21 localhost systemd[1]: Reached target Switch Root.
Nov 26 12:08:21 localhost systemd[1]: Starting Switch Root...
Nov 26 12:08:21 localhost systemd[1]: Switching root.
Nov 26 12:08:21 localhost systemd-journald[284]: Journal stopped
Nov 26 12:08:22 localhost systemd-journald[284]: Received SIGTERM from PID 1 (systemd).
Nov 26 12:08:22 localhost kernel: audit: type=1404 audit(1764158901.858:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 26 12:08:22 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 12:08:22 localhost kernel: SELinux:  policy capability open_perms=1
Nov 26 12:08:22 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 12:08:22 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 26 12:08:22 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 12:08:22 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 12:08:22 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 12:08:22 localhost kernel: audit: type=1403 audit(1764158901.980:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 26 12:08:22 localhost systemd[1]: Successfully loaded SELinux policy in 125.106ms.
Nov 26 12:08:22 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.490ms.
Nov 26 12:08:22 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 26 12:08:22 localhost systemd[1]: Detected virtualization kvm.
Nov 26 12:08:22 localhost systemd[1]: Detected architecture x86-64.
Nov 26 12:08:22 localhost systemd-rc-local-generator[606]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:08:22 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 26 12:08:22 localhost systemd[1]: Stopped Switch Root.
Nov 26 12:08:22 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 26 12:08:22 localhost systemd[1]: Created slice Slice /system/getty.
Nov 26 12:08:22 localhost systemd[1]: Created slice Slice /system/serial-getty.
Nov 26 12:08:22 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Nov 26 12:08:22 localhost systemd[1]: Created slice User and Session Slice.
Nov 26 12:08:22 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 26 12:08:22 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Nov 26 12:08:22 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 26 12:08:22 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 26 12:08:22 localhost systemd[1]: Stopped target Switch Root.
Nov 26 12:08:22 localhost systemd[1]: Stopped target Initrd File Systems.
Nov 26 12:08:22 localhost systemd[1]: Stopped target Initrd Root File System.
Nov 26 12:08:22 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Nov 26 12:08:22 localhost systemd[1]: Reached target Path Units.
Nov 26 12:08:22 localhost systemd[1]: Reached target rpc_pipefs.target.
Nov 26 12:08:22 localhost systemd[1]: Reached target Slice Units.
Nov 26 12:08:22 localhost systemd[1]: Reached target Swaps.
Nov 26 12:08:22 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Nov 26 12:08:22 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Nov 26 12:08:22 localhost systemd[1]: Reached target RPC Port Mapper.
Nov 26 12:08:22 localhost systemd[1]: Listening on Process Core Dump Socket.
Nov 26 12:08:22 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Nov 26 12:08:22 localhost systemd[1]: Listening on udev Control Socket.
Nov 26 12:08:22 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 26 12:08:22 localhost systemd[1]: Mounting Huge Pages File System...
Nov 26 12:08:22 localhost systemd[1]: Mounting POSIX Message Queue File System...
Nov 26 12:08:22 localhost systemd[1]: Mounting Kernel Debug File System...
Nov 26 12:08:22 localhost systemd[1]: Mounting Kernel Trace File System...
Nov 26 12:08:22 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 26 12:08:22 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 26 12:08:22 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 26 12:08:22 localhost systemd[1]: Starting Load Kernel Module drm...
Nov 26 12:08:22 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Nov 26 12:08:22 localhost systemd[1]: Starting Load Kernel Module fuse...
Nov 26 12:08:22 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 26 12:08:22 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 26 12:08:22 localhost systemd[1]: Stopped File System Check on Root Device.
Nov 26 12:08:22 localhost systemd[1]: Stopped Journal Service.
Nov 26 12:08:22 localhost systemd[1]: Starting Journal Service...
Nov 26 12:08:22 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 26 12:08:22 localhost systemd[1]: Starting Generate network units from Kernel command line...
Nov 26 12:08:22 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 26 12:08:22 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Nov 26 12:08:22 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 26 12:08:22 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 26 12:08:22 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 26 12:08:22 localhost systemd[1]: Mounted Huge Pages File System.
Nov 26 12:08:22 localhost systemd[1]: Mounted POSIX Message Queue File System.
Nov 26 12:08:22 localhost systemd[1]: Mounted Kernel Debug File System.
Nov 26 12:08:22 localhost systemd[1]: Mounted Kernel Trace File System.
Nov 26 12:08:22 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 26 12:08:22 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 26 12:08:22 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 26 12:08:22 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 26 12:08:22 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 26 12:08:22 localhost systemd-journald[647]: Journal started
Nov 26 12:08:22 localhost systemd-journald[647]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 26 12:08:22 localhost systemd[1]: Queued start job for default target Multi-User System.
Nov 26 12:08:22 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 26 12:08:22 localhost systemd[1]: Started Journal Service.
Nov 26 12:08:22 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 26 12:08:22 localhost systemd[1]: Finished Generate network units from Kernel command line.
Nov 26 12:08:22 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 26 12:08:22 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 26 12:08:22 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 26 12:08:22 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 26 12:08:22 localhost kernel: ACPI: bus type drm_connector registered
Nov 26 12:08:22 localhost kernel: fuse: init (API version 7.37)
Nov 26 12:08:22 localhost systemd[1]: Starting Rebuild Hardware Database...
Nov 26 12:08:22 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 26 12:08:22 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 26 12:08:22 localhost systemd[1]: Starting Load/Save OS Random Seed...
Nov 26 12:08:22 localhost systemd-journald[647]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 26 12:08:22 localhost systemd-journald[647]: Received client request to flush runtime journal.
Nov 26 12:08:22 localhost systemd[1]: Starting Create System Users...
Nov 26 12:08:22 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 26 12:08:22 localhost systemd[1]: Finished Load Kernel Module drm.
Nov 26 12:08:22 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 26 12:08:22 localhost systemd[1]: Finished Load Kernel Module fuse.
Nov 26 12:08:22 localhost systemd[1]: Mounting FUSE Control File System...
Nov 26 12:08:22 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 26 12:08:22 localhost systemd[1]: Mounted FUSE Control File System.
Nov 26 12:08:22 localhost systemd[1]: Finished Load/Save OS Random Seed.
Nov 26 12:08:22 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 26 12:08:22 localhost systemd[1]: Finished Create System Users.
Nov 26 12:08:22 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 26 12:08:22 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 26 12:08:22 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 26 12:08:22 localhost systemd[1]: Reached target Preparation for Local File Systems.
Nov 26 12:08:22 localhost systemd[1]: Reached target Local File Systems.
Nov 26 12:08:22 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 26 12:08:22 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 26 12:08:22 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 26 12:08:22 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 26 12:08:22 localhost systemd[1]: Starting Automatic Boot Loader Update...
Nov 26 12:08:22 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 26 12:08:22 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 26 12:08:22 localhost bootctl[664]: Couldn't find EFI system partition, skipping.
Nov 26 12:08:22 localhost systemd[1]: Finished Automatic Boot Loader Update.
Nov 26 12:08:22 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 26 12:08:22 localhost systemd[1]: Starting Security Auditing Service...
Nov 26 12:08:22 localhost systemd[1]: Starting RPC Bind...
Nov 26 12:08:22 localhost systemd[1]: Starting Rebuild Journal Catalog...
Nov 26 12:08:22 localhost auditd[670]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 26 12:08:22 localhost auditd[670]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 26 12:08:22 localhost systemd[1]: Finished Rebuild Journal Catalog.
Nov 26 12:08:22 localhost systemd[1]: Started RPC Bind.
Nov 26 12:08:22 localhost augenrules[675]: /sbin/augenrules: No change
Nov 26 12:08:22 localhost augenrules[690]: No rules
Nov 26 12:08:22 localhost augenrules[690]: enabled 1
Nov 26 12:08:22 localhost augenrules[690]: failure 1
Nov 26 12:08:22 localhost augenrules[690]: pid 670
Nov 26 12:08:22 localhost augenrules[690]: rate_limit 0
Nov 26 12:08:22 localhost augenrules[690]: backlog_limit 8192
Nov 26 12:08:22 localhost augenrules[690]: lost 0
Nov 26 12:08:22 localhost augenrules[690]: backlog 0
Nov 26 12:08:22 localhost augenrules[690]: backlog_wait_time 60000
Nov 26 12:08:22 localhost augenrules[690]: backlog_wait_time_actual 0
Nov 26 12:08:22 localhost augenrules[690]: enabled 1
Nov 26 12:08:22 localhost augenrules[690]: failure 1
Nov 26 12:08:22 localhost augenrules[690]: pid 670
Nov 26 12:08:22 localhost augenrules[690]: rate_limit 0
Nov 26 12:08:22 localhost augenrules[690]: backlog_limit 8192
Nov 26 12:08:22 localhost augenrules[690]: lost 0
Nov 26 12:08:22 localhost augenrules[690]: backlog 0
Nov 26 12:08:22 localhost augenrules[690]: backlog_wait_time 60000
Nov 26 12:08:22 localhost augenrules[690]: backlog_wait_time_actual 0
Nov 26 12:08:22 localhost augenrules[690]: enabled 1
Nov 26 12:08:22 localhost augenrules[690]: failure 1
Nov 26 12:08:22 localhost augenrules[690]: pid 670
Nov 26 12:08:22 localhost augenrules[690]: rate_limit 0
Nov 26 12:08:22 localhost augenrules[690]: backlog_limit 8192
Nov 26 12:08:22 localhost augenrules[690]: lost 0
Nov 26 12:08:22 localhost augenrules[690]: backlog 0
Nov 26 12:08:22 localhost augenrules[690]: backlog_wait_time 60000
Nov 26 12:08:22 localhost augenrules[690]: backlog_wait_time_actual 0
Nov 26 12:08:22 localhost systemd[1]: Started Security Auditing Service.
Nov 26 12:08:22 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 26 12:08:22 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 26 12:08:22 localhost systemd[1]: Finished Rebuild Hardware Database.
Nov 26 12:08:22 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 26 12:08:22 localhost systemd-udevd[699]: Using default interface naming scheme 'rhel-9.0'.
Nov 26 12:08:22 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 26 12:08:22 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 26 12:08:22 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 26 12:08:22 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 26 12:08:22 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 26 12:08:22 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 26 12:08:22 localhost systemd[1]: Starting Update is Completed...
Nov 26 12:08:22 localhost systemd[1]: Finished Update is Completed.
Nov 26 12:08:22 localhost systemd-udevd[705]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 12:08:22 localhost kernel: lpc_ich 0000:00:1f.0: I/O space for GPIO uninitialized
Nov 26 12:08:23 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 26 12:08:23 localhost kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 26 12:08:23 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 26 12:08:23 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 26 12:08:23 localhost kernel: iTCO_vendor_support: vendor-support=0
Nov 26 12:08:23 localhost kernel: iTCO_wdt iTCO_wdt.1.auto: Found a ICH9 TCO device (Version=2, TCOBASE=0x0660)
Nov 26 12:08:23 localhost kernel: iTCO_wdt iTCO_wdt.1.auto: initialized. heartbeat=30 sec (nowayout=0)
Nov 26 12:08:23 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Nov 26 12:08:23 localhost kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Nov 26 12:08:23 localhost kernel: Console: switching to colour dummy device 80x25
Nov 26 12:08:23 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 26 12:08:23 localhost kernel: [drm] features: -context_init
Nov 26 12:08:23 localhost kernel: [drm] number of scanouts: 1
Nov 26 12:08:23 localhost kernel: [drm] number of cap sets: 0
Nov 26 12:08:23 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Nov 26 12:08:23 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 26 12:08:23 localhost kernel: Console: switching to colour frame buffer device 160x50
Nov 26 12:08:23 localhost kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 26 12:08:23 localhost kernel: kvm_amd: TSC scaling supported
Nov 26 12:08:23 localhost kernel: kvm_amd: Nested Virtualization enabled
Nov 26 12:08:23 localhost kernel: kvm_amd: Nested Paging enabled
Nov 26 12:08:23 localhost kernel: kvm_amd: LBR virtualization supported
Nov 26 12:08:23 localhost kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 26 12:08:23 localhost kernel: kvm_amd: Virtual GIF supported
Nov 26 12:08:23 localhost systemd[1]: Reached target System Initialization.
Nov 26 12:08:23 localhost systemd[1]: Started dnf makecache --timer.
Nov 26 12:08:23 localhost systemd[1]: Started Daily rotation of log files.
Nov 26 12:08:23 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 26 12:08:23 localhost systemd[1]: Reached target Timer Units.
Nov 26 12:08:23 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 26 12:08:23 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 26 12:08:23 localhost systemd[1]: Reached target Socket Units.
Nov 26 12:08:23 localhost systemd[1]: Starting D-Bus System Message Bus...
Nov 26 12:08:23 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 26 12:08:23 localhost systemd[1]: Started D-Bus System Message Bus.
Nov 26 12:08:23 localhost systemd[1]: Reached target Basic System.
Nov 26 12:08:23 localhost dbus-broker-lau[766]: Ready
Nov 26 12:08:23 localhost systemd[1]: Starting NTP client/server...
Nov 26 12:08:23 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 26 12:08:23 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 26 12:08:23 localhost systemd[1]: Starting IPv4 firewall with iptables...
Nov 26 12:08:23 localhost systemd[1]: Started irqbalance daemon.
Nov 26 12:08:23 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 26 12:08:23 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 26 12:08:23 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 26 12:08:23 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 26 12:08:23 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 26 12:08:23 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 26 12:08:23 localhost systemd[1]: Reached target User and Group Name Lookups.
Nov 26 12:08:23 localhost systemd[1]: Starting User Login Management...
Nov 26 12:08:23 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 26 12:08:23 localhost chronyd[784]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 26 12:08:23 localhost chronyd[784]: Loaded 0 symmetric keys
Nov 26 12:08:23 localhost chronyd[784]: Using right/UTC timezone to obtain leap second data
Nov 26 12:08:23 localhost chronyd[784]: Loaded seccomp filter (level 2)
Nov 26 12:08:23 localhost systemd[1]: Started NTP client/server.
Nov 26 12:08:23 localhost systemd-logind[777]: New seat seat0.
Nov 26 12:08:23 localhost systemd-logind[777]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 26 12:08:23 localhost systemd-logind[777]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 26 12:08:23 localhost systemd[1]: Started User Login Management.
Nov 26 12:08:23 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 26 12:08:23 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 26 12:08:23 localhost iptables.init[771]: iptables: Applying firewall rules: [  OK  ]
Nov 26 12:08:23 localhost systemd[1]: Finished IPv4 firewall with iptables.
Nov 26 12:08:23 localhost cloud-init[794]: Cloud-init v. 24.4-7.el9 running 'init-local' at Wed, 26 Nov 2025 12:08:23 +0000. Up 5.49 seconds.
Nov 26 12:08:24 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Nov 26 12:08:24 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Nov 26 12:08:24 localhost systemd[1]: run-cloud\x2dinit-tmp-tmp6y_bti08.mount: Deactivated successfully.
Nov 26 12:08:24 localhost systemd[1]: Starting Hostname Service...
Nov 26 12:08:24 localhost systemd[1]: Started Hostname Service.
Nov 26 12:08:24 np0005536586 systemd-hostnamed[808]: Hostname set to <np0005536586> (static)
Nov 26 12:08:24 np0005536586 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 26 12:08:24 np0005536586 systemd[1]: Reached target Preparation for Network.
Nov 26 12:08:24 np0005536586 systemd[1]: Starting Network Manager...
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3134] NetworkManager (version 1.54.1-1.el9) is starting... (boot:031c7117-1661-4641-8ff4-d1885bc6a83e)
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3137] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3222] manager[0x558140d6e080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3250] hostname: hostname: using hostnamed
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3250] hostname: static hostname changed from (none) to "np0005536586"
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3252] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3328] manager[0x558140d6e080]: rfkill: Wi-Fi hardware radio set enabled
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3328] manager[0x558140d6e080]: rfkill: WWAN hardware radio set enabled
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3373] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3373] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3374] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3374] manager: Networking is enabled by state file
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3375] settings: Loaded settings plugin: keyfile (internal)
Nov 26 12:08:24 np0005536586 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3409] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3436] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3455] dhcp: init: Using DHCP client 'internal'
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3458] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3470] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3480] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3487] device (lo): Activation: starting connection 'lo' (14d47366-79b4-47b4-8c24-e57561e2dedc)
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3495] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3499] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3521] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3528] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 26 12:08:24 np0005536586 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3532] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3535] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3538] device (eth0): carrier: link connected
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3541] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3546] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 26 12:08:24 np0005536586 systemd[1]: Started Network Manager.
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3558] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3562] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3562] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3566] manager: NetworkManager state is now CONNECTING
Nov 26 12:08:24 np0005536586 systemd[1]: Reached target Network.
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3569] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3575] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3580] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3585] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Nov 26 12:08:24 np0005536586 systemd[1]: Starting Network Manager Wait Online...
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3611] dhcp4 (eth0): state changed new lease, address=192.168.26.109
Nov 26 12:08:24 np0005536586 systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3619] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 26 12:08:24 np0005536586 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3692] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3697] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 26 12:08:24 np0005536586 NetworkManager[812]: <info>  [1764158904.3706] device (lo): Activation: successful, device activated.
Nov 26 12:08:24 np0005536586 systemd[1]: Started GSSAPI Proxy Daemon.
Nov 26 12:08:24 np0005536586 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 26 12:08:24 np0005536586 systemd[1]: Reached target NFS client services.
Nov 26 12:08:24 np0005536586 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 26 12:08:24 np0005536586 systemd[1]: Reached target Remote File Systems.
Nov 26 12:08:24 np0005536586 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 26 12:08:26 np0005536586 NetworkManager[812]: <info>  [1764158906.0745] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 12:08:27 np0005536586 NetworkManager[812]: <info>  [1764158907.1208] dhcp6 (eth0): state changed new lease, address=2001:db8::f0
Nov 26 12:08:28 np0005536586 NetworkManager[812]: <info>  [1764158908.9552] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 12:08:28 np0005536586 NetworkManager[812]: <info>  [1764158908.9585] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 12:08:28 np0005536586 NetworkManager[812]: <info>  [1764158908.9587] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 12:08:28 np0005536586 NetworkManager[812]: <info>  [1764158908.9591] manager: NetworkManager state is now CONNECTED_SITE
Nov 26 12:08:28 np0005536586 NetworkManager[812]: <info>  [1764158908.9595] device (eth0): Activation: successful, device activated.
Nov 26 12:08:28 np0005536586 NetworkManager[812]: <info>  [1764158908.9600] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 26 12:08:28 np0005536586 NetworkManager[812]: <info>  [1764158908.9603] manager: startup complete
Nov 26 12:08:28 np0005536586 systemd[1]: Finished Network Manager Wait Online.
Nov 26 12:08:28 np0005536586 systemd[1]: Starting Cloud-init: Network Stage...
Nov 26 12:08:29 np0005536586 cloud-init[878]: Cloud-init v. 24.4-7.el9 running 'init' at Wed, 26 Nov 2025 12:08:29 +0000. Up 10.85 seconds.
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: |  eth0  | True |        192.168.26.109        | 255.255.255.0 | global | fa:16:3e:a4:16:5c |
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: |  eth0  | True |       2001:db8::f0/128       |       .       | global | fa:16:3e:a4:16:5c |
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: |  eth0  | True | fe80::f816:3eff:fea4:165c/64 |       .       |  link  | fa:16:3e:a4:16:5c |
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: | Route |   Destination   |   Gateway    |     Genmask     | Interface | Flags |
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: |   0   |     0.0.0.0     | 192.168.26.1 |     0.0.0.0     |    eth0   |   UG  |
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: |   1   | 169.254.169.254 | 192.168.26.2 | 255.255.255.255 |    eth0   |  UGH  |
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: |   2   |   192.168.26.0  |   0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: +++++++++++++++++++++Route IPv6 info++++++++++++++++++++++
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: +-------+--------------+-------------+-----------+-------+
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: | Route | Destination  |   Gateway   | Interface | Flags |
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: +-------+--------------+-------------+-----------+-------+
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: |   1   | 2001:db8::1  |      ::     |    eth0   |   U   |
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: |   2   | 2001:db8::f0 |      ::     |    eth0   |   U   |
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: |   3   |  fe80::/64   |      ::     |    eth0   |   U   |
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: |   4   |     ::/0     | 2001:db8::1 |    eth0   |   UG  |
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: |   6   |    local     |      ::     |    eth0   |   U   |
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: |   7   |    local     |      ::     |    eth0   |   U   |
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: |   8   |  multicast   |      ::     |    eth0   |   U   |
Nov 26 12:08:29 np0005536586 cloud-init[878]: ci-info: +-------+--------------+-------------+-----------+-------+
Nov 26 12:08:29 np0005536586 chronyd[784]: Selected source 50.117.3.95 (2.centos.pool.ntp.org)
Nov 26 12:08:29 np0005536586 chronyd[784]: System clock TAI offset set to 37 seconds
Nov 26 12:08:29 np0005536586 useradd[945]: new group: name=cloud-user, GID=1001
Nov 26 12:08:29 np0005536586 useradd[945]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Nov 26 12:08:29 np0005536586 useradd[945]: add 'cloud-user' to group 'adm'
Nov 26 12:08:29 np0005536586 useradd[945]: add 'cloud-user' to group 'systemd-journal'
Nov 26 12:08:29 np0005536586 useradd[945]: add 'cloud-user' to shadow group 'adm'
Nov 26 12:08:29 np0005536586 useradd[945]: add 'cloud-user' to shadow group 'systemd-journal'
Nov 26 12:08:30 np0005536586 cloud-init[878]: Generating public/private rsa key pair.
Nov 26 12:08:30 np0005536586 cloud-init[878]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 26 12:08:30 np0005536586 cloud-init[878]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 26 12:08:30 np0005536586 cloud-init[878]: The key fingerprint is:
Nov 26 12:08:30 np0005536586 cloud-init[878]: SHA256:oU4bkBjGlPS30WixaiIId9EVSm/KlqgkMBmkj/8DaVY root@np0005536586
Nov 26 12:08:30 np0005536586 cloud-init[878]: The key's randomart image is:
Nov 26 12:08:30 np0005536586 cloud-init[878]: +---[RSA 3072]----+
Nov 26 12:08:30 np0005536586 cloud-init[878]: |o==...o.o.       |
Nov 26 12:08:30 np0005536586 cloud-init[878]: |.+o+ +.B         |
Nov 26 12:08:30 np0005536586 cloud-init[878]: |* o = B =        |
Nov 26 12:08:30 np0005536586 cloud-init[878]: |+= .EB B .       |
Nov 26 12:08:30 np0005536586 cloud-init[878]: |= +o+ X S        |
Nov 26 12:08:30 np0005536586 cloud-init[878]: | ==+ + o         |
Nov 26 12:08:30 np0005536586 cloud-init[878]: | oo.  o          |
Nov 26 12:08:30 np0005536586 cloud-init[878]: |   ..            |
Nov 26 12:08:30 np0005536586 cloud-init[878]: |    ..           |
Nov 26 12:08:30 np0005536586 cloud-init[878]: +----[SHA256]-----+
Nov 26 12:08:30 np0005536586 cloud-init[878]: Generating public/private ecdsa key pair.
Nov 26 12:08:30 np0005536586 cloud-init[878]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 26 12:08:30 np0005536586 cloud-init[878]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 26 12:08:30 np0005536586 cloud-init[878]: The key fingerprint is:
Nov 26 12:08:30 np0005536586 cloud-init[878]: SHA256:d+JWs5Z695iDCleHiLz+sFmjVrvm69RLs8wy0o4DSSI root@np0005536586
Nov 26 12:08:30 np0005536586 cloud-init[878]: The key's randomart image is:
Nov 26 12:08:30 np0005536586 cloud-init[878]: +---[ECDSA 256]---+
Nov 26 12:08:30 np0005536586 cloud-init[878]: |                 |
Nov 26 12:08:30 np0005536586 cloud-init[878]: |                 |
Nov 26 12:08:30 np0005536586 cloud-init[878]: |                 |
Nov 26 12:08:30 np0005536586 cloud-init[878]: |  E . .. . . .   |
Nov 26 12:08:30 np0005536586 cloud-init[878]: |   . o .S + * .  |
Nov 26 12:08:30 np0005536586 cloud-init[878]: |      o  +.* =   |
Nov 26 12:08:30 np0005536586 cloud-init[878]: |       .+oB.B.   |
Nov 26 12:08:30 np0005536586 cloud-init[878]: |       .+%*Bo+oo |
Nov 26 12:08:30 np0005536586 cloud-init[878]: |       .*OXB=.oo.|
Nov 26 12:08:30 np0005536586 cloud-init[878]: +----[SHA256]-----+
Nov 26 12:08:30 np0005536586 cloud-init[878]: Generating public/private ed25519 key pair.
Nov 26 12:08:30 np0005536586 cloud-init[878]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 26 12:08:30 np0005536586 cloud-init[878]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 26 12:08:30 np0005536586 cloud-init[878]: The key fingerprint is:
Nov 26 12:08:30 np0005536586 cloud-init[878]: SHA256:Q2i8rhY/KJ/97GBHBvbjpafCAaO8LMKHbCxO+TEFq04 root@np0005536586
Nov 26 12:08:30 np0005536586 cloud-init[878]: The key's randomart image is:
Nov 26 12:08:30 np0005536586 cloud-init[878]: +--[ED25519 256]--+
Nov 26 12:08:30 np0005536586 cloud-init[878]: |                 |
Nov 26 12:08:30 np0005536586 cloud-init[878]: |     . .         |
Nov 26 12:08:30 np0005536586 cloud-init[878]: |   .  * .        |
Nov 26 12:08:30 np0005536586 cloud-init[878]: |    =o =         |
Nov 26 12:08:30 np0005536586 cloud-init[878]: | . o +. S .      |
Nov 26 12:08:30 np0005536586 cloud-init[878]: |  = o..+ =       |
Nov 26 12:08:30 np0005536586 cloud-init[878]: |+E.+ =+.+ .      |
Nov 26 12:08:30 np0005536586 cloud-init[878]: |B*=.===+ o       |
Nov 26 12:08:30 np0005536586 cloud-init[878]: |=+.=+ .==        |
Nov 26 12:08:30 np0005536586 cloud-init[878]: +----[SHA256]-----+
Nov 26 12:08:30 np0005536586 systemd[1]: Finished Cloud-init: Network Stage.
Nov 26 12:08:30 np0005536586 systemd[1]: Reached target Cloud-config availability.
Nov 26 12:08:30 np0005536586 systemd[1]: Reached target Network is Online.
Nov 26 12:08:30 np0005536586 systemd[1]: Starting Cloud-init: Config Stage...
Nov 26 12:08:30 np0005536586 systemd[1]: Starting Crash recovery kernel arming...
Nov 26 12:08:30 np0005536586 systemd[1]: Starting Notify NFS peers of a restart...
Nov 26 12:08:30 np0005536586 systemd[1]: Starting System Logging Service...
Nov 26 12:08:30 np0005536586 sm-notify[961]: Version 2.5.4 starting
Nov 26 12:08:30 np0005536586 systemd[1]: Starting OpenSSH server daemon...
Nov 26 12:08:30 np0005536586 systemd[1]: Starting Permit User Sessions...
Nov 26 12:08:30 np0005536586 systemd[1]: Started Notify NFS peers of a restart.
Nov 26 12:08:30 np0005536586 sshd[963]: Server listening on 0.0.0.0 port 22.
Nov 26 12:08:30 np0005536586 sshd[963]: Server listening on :: port 22.
Nov 26 12:08:30 np0005536586 systemd[1]: Started OpenSSH server daemon.
Nov 26 12:08:30 np0005536586 systemd[1]: Finished Permit User Sessions.
Nov 26 12:08:30 np0005536586 systemd[1]: Started Command Scheduler.
Nov 26 12:08:30 np0005536586 systemd[1]: Started Getty on tty1.
Nov 26 12:08:30 np0005536586 systemd[1]: Started Serial Getty on ttyS0.
Nov 26 12:08:30 np0005536586 systemd[1]: Reached target Login Prompts.
Nov 26 12:08:30 np0005536586 crond[966]: (CRON) STARTUP (1.5.7)
Nov 26 12:08:30 np0005536586 crond[966]: (CRON) INFO (Syslog will be used instead of sendmail.)
Nov 26 12:08:30 np0005536586 crond[966]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 99% if used.)
Nov 26 12:08:30 np0005536586 crond[966]: (CRON) INFO (running with inotify support)
Nov 26 12:08:30 np0005536586 rsyslogd[962]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="962" x-info="https://www.rsyslog.com"] start
Nov 26 12:08:30 np0005536586 rsyslogd[962]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 26 12:08:30 np0005536586 systemd[1]: Started System Logging Service.
Nov 26 12:08:30 np0005536586 systemd[1]: Reached target Multi-User System.
Nov 26 12:08:30 np0005536586 systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 26 12:08:30 np0005536586 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 26 12:08:30 np0005536586 systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 26 12:08:30 np0005536586 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 12:08:30 np0005536586 kdumpctl[974]: kdump: No kdump initial ramdisk found.
Nov 26 12:08:30 np0005536586 kdumpctl[974]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 26 12:08:30 np0005536586 chronyd[784]: Selected source 204.9.54.119 (2.centos.pool.ntp.org)
Nov 26 12:08:30 np0005536586 cloud-init[1084]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Wed, 26 Nov 2025 12:08:30 +0000. Up 12.31 seconds.
Nov 26 12:08:30 np0005536586 systemd[1]: Finished Cloud-init: Config Stage.
Nov 26 12:08:30 np0005536586 systemd[1]: Starting Cloud-init: Final Stage...
Nov 26 12:08:30 np0005536586 dracut[1222]: dracut-057-102.git20250818.el9
Nov 26 12:08:31 np0005536586 cloud-init[1240]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Wed, 26 Nov 2025 12:08:31 +0000. Up 12.68 seconds.
Nov 26 12:08:31 np0005536586 dracut[1224]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Nov 26 12:08:31 np0005536586 cloud-init[1256]: #############################################################
Nov 26 12:08:31 np0005536586 cloud-init[1259]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 26 12:08:31 np0005536586 cloud-init[1265]: 256 SHA256:d+JWs5Z695iDCleHiLz+sFmjVrvm69RLs8wy0o4DSSI root@np0005536586 (ECDSA)
Nov 26 12:08:31 np0005536586 cloud-init[1269]: 256 SHA256:Q2i8rhY/KJ/97GBHBvbjpafCAaO8LMKHbCxO+TEFq04 root@np0005536586 (ED25519)
Nov 26 12:08:31 np0005536586 cloud-init[1276]: 3072 SHA256:oU4bkBjGlPS30WixaiIId9EVSm/KlqgkMBmkj/8DaVY root@np0005536586 (RSA)
Nov 26 12:08:31 np0005536586 cloud-init[1277]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 26 12:08:31 np0005536586 cloud-init[1278]: #############################################################
Nov 26 12:08:31 np0005536586 cloud-init[1240]: Cloud-init v. 24.4-7.el9 finished at Wed, 26 Nov 2025 12:08:31 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 12.83 seconds
Nov 26 12:08:31 np0005536586 systemd[1]: Finished Cloud-init: Final Stage.
Nov 26 12:08:31 np0005536586 systemd[1]: Reached target Cloud-init target.
Nov 26 12:08:31 np0005536586 sshd-session[1315]: Connection closed by 192.168.26.11 port 50654 [preauth]
Nov 26 12:08:31 np0005536586 sshd-session[1321]: Unable to negotiate with 192.168.26.11 port 50668: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Nov 26 12:08:31 np0005536586 sshd-session[1326]: Connection closed by 192.168.26.11 port 50678 [preauth]
Nov 26 12:08:31 np0005536586 sshd-session[1331]: Unable to negotiate with 192.168.26.11 port 50680: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Nov 26 12:08:31 np0005536586 sshd-session[1335]: Unable to negotiate with 192.168.26.11 port 50684: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Nov 26 12:08:31 np0005536586 sshd-session[1351]: Unable to negotiate with 192.168.26.11 port 50714: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Nov 26 12:08:31 np0005536586 sshd-session[1355]: Unable to negotiate with 192.168.26.11 port 50724: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Nov 26 12:08:31 np0005536586 sshd-session[1338]: Connection closed by 192.168.26.11 port 50688 [preauth]
Nov 26 12:08:31 np0005536586 sshd-session[1345]: Connection closed by 192.168.26.11 port 50700 [preauth]
Nov 26 12:08:31 np0005536586 dracut[1224]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 26 12:08:31 np0005536586 dracut[1224]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 26 12:08:31 np0005536586 dracut[1224]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 26 12:08:31 np0005536586 dracut[1224]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 26 12:08:31 np0005536586 dracut[1224]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 26 12:08:31 np0005536586 dracut[1224]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 26 12:08:31 np0005536586 dracut[1224]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 26 12:08:31 np0005536586 dracut[1224]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 26 12:08:31 np0005536586 dracut[1224]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 26 12:08:31 np0005536586 dracut[1224]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 26 12:08:31 np0005536586 dracut[1224]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 26 12:08:31 np0005536586 dracut[1224]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 26 12:08:31 np0005536586 dracut[1224]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 26 12:08:31 np0005536586 dracut[1224]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 26 12:08:31 np0005536586 dracut[1224]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Nov 26 12:08:31 np0005536586 dracut[1224]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Nov 26 12:08:31 np0005536586 dracut[1224]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 26 12:08:31 np0005536586 dracut[1224]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 26 12:08:31 np0005536586 dracut[1224]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 26 12:08:31 np0005536586 dracut[1224]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 26 12:08:31 np0005536586 dracut[1224]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 26 12:08:31 np0005536586 dracut[1224]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 26 12:08:31 np0005536586 dracut[1224]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 26 12:08:31 np0005536586 dracut[1224]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 26 12:08:31 np0005536586 dracut[1224]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 26 12:08:31 np0005536586 dracut[1224]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 26 12:08:31 np0005536586 dracut[1224]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 26 12:08:31 np0005536586 dracut[1224]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 26 12:08:31 np0005536586 dracut[1224]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 26 12:08:31 np0005536586 dracut[1224]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 26 12:08:31 np0005536586 dracut[1224]: Module 'resume' will not be installed, because it's in the list to be omitted!
Nov 26 12:08:31 np0005536586 dracut[1224]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 26 12:08:31 np0005536586 dracut[1224]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Nov 26 12:08:32 np0005536586 dracut[1224]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 26 12:08:32 np0005536586 dracut[1224]: memstrack is not available
Nov 26 12:08:32 np0005536586 dracut[1224]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 26 12:08:32 np0005536586 dracut[1224]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 26 12:08:32 np0005536586 dracut[1224]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 26 12:08:32 np0005536586 dracut[1224]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 26 12:08:32 np0005536586 dracut[1224]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 26 12:08:32 np0005536586 dracut[1224]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 26 12:08:32 np0005536586 dracut[1224]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 26 12:08:32 np0005536586 dracut[1224]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 26 12:08:32 np0005536586 dracut[1224]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 26 12:08:32 np0005536586 dracut[1224]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 26 12:08:32 np0005536586 dracut[1224]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 26 12:08:32 np0005536586 dracut[1224]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 26 12:08:32 np0005536586 dracut[1224]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 26 12:08:32 np0005536586 dracut[1224]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 26 12:08:32 np0005536586 dracut[1224]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 26 12:08:32 np0005536586 dracut[1224]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 26 12:08:32 np0005536586 dracut[1224]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 26 12:08:32 np0005536586 dracut[1224]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 26 12:08:32 np0005536586 dracut[1224]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 26 12:08:32 np0005536586 dracut[1224]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 26 12:08:32 np0005536586 dracut[1224]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 26 12:08:32 np0005536586 dracut[1224]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 26 12:08:32 np0005536586 dracut[1224]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 26 12:08:32 np0005536586 dracut[1224]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 26 12:08:32 np0005536586 dracut[1224]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 26 12:08:32 np0005536586 dracut[1224]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 26 12:08:32 np0005536586 dracut[1224]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 26 12:08:32 np0005536586 dracut[1224]: memstrack is not available
Nov 26 12:08:32 np0005536586 dracut[1224]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 26 12:08:32 np0005536586 dracut[1224]: *** Including module: systemd ***
Nov 26 12:08:32 np0005536586 dracut[1224]: *** Including module: fips ***
Nov 26 12:08:32 np0005536586 dracut[1224]: *** Including module: systemd-initrd ***
Nov 26 12:08:32 np0005536586 dracut[1224]: *** Including module: i18n ***
Nov 26 12:08:33 np0005536586 dracut[1224]: *** Including module: drm ***
Nov 26 12:08:33 np0005536586 irqbalance[772]: Cannot change IRQ 48 affinity: Operation not permitted
Nov 26 12:08:33 np0005536586 irqbalance[772]: IRQ 48 affinity is now unmanaged
Nov 26 12:08:33 np0005536586 irqbalance[772]: Cannot change IRQ 46 affinity: Operation not permitted
Nov 26 12:08:33 np0005536586 irqbalance[772]: IRQ 46 affinity is now unmanaged
Nov 26 12:08:33 np0005536586 dracut[1224]: *** Including module: prefixdevname ***
Nov 26 12:08:33 np0005536586 dracut[1224]: *** Including module: kernel-modules ***
Nov 26 12:08:33 np0005536586 kernel: block vda: the capability attribute has been deprecated.
Nov 26 12:08:34 np0005536586 dracut[1224]: *** Including module: kernel-modules-extra ***
Nov 26 12:08:34 np0005536586 dracut[1224]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Nov 26 12:08:34 np0005536586 dracut[1224]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Nov 26 12:08:34 np0005536586 dracut[1224]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Nov 26 12:08:34 np0005536586 dracut[1224]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Nov 26 12:08:34 np0005536586 dracut[1224]: *** Including module: qemu ***
Nov 26 12:08:34 np0005536586 dracut[1224]: *** Including module: fstab-sys ***
Nov 26 12:08:34 np0005536586 dracut[1224]: *** Including module: rootfs-block ***
Nov 26 12:08:34 np0005536586 dracut[1224]: *** Including module: terminfo ***
Nov 26 12:08:34 np0005536586 dracut[1224]: *** Including module: udev-rules ***
Nov 26 12:08:34 np0005536586 dracut[1224]: Skipping udev rule: 91-permissions.rules
Nov 26 12:08:34 np0005536586 dracut[1224]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 26 12:08:34 np0005536586 dracut[1224]: *** Including module: virtiofs ***
Nov 26 12:08:34 np0005536586 dracut[1224]: *** Including module: dracut-systemd ***
Nov 26 12:08:34 np0005536586 dracut[1224]: *** Including module: usrmount ***
Nov 26 12:08:34 np0005536586 dracut[1224]: *** Including module: base ***
Nov 26 12:08:34 np0005536586 dracut[1224]: *** Including module: fs-lib ***
Nov 26 12:08:34 np0005536586 dracut[1224]: *** Including module: kdumpbase ***
Nov 26 12:08:35 np0005536586 dracut[1224]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 26 12:08:35 np0005536586 dracut[1224]:   microcode_ctl module: mangling fw_dir
Nov 26 12:08:35 np0005536586 dracut[1224]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 26 12:08:35 np0005536586 dracut[1224]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 26 12:08:35 np0005536586 dracut[1224]:     microcode_ctl: configuration "intel" is ignored
Nov 26 12:08:35 np0005536586 dracut[1224]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 26 12:08:35 np0005536586 dracut[1224]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 26 12:08:35 np0005536586 dracut[1224]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 26 12:08:35 np0005536586 dracut[1224]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 26 12:08:35 np0005536586 dracut[1224]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 26 12:08:35 np0005536586 dracut[1224]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 26 12:08:35 np0005536586 dracut[1224]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 26 12:08:35 np0005536586 dracut[1224]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 26 12:08:35 np0005536586 dracut[1224]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 26 12:08:35 np0005536586 dracut[1224]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 26 12:08:35 np0005536586 dracut[1224]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 26 12:08:35 np0005536586 dracut[1224]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 26 12:08:35 np0005536586 dracut[1224]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 26 12:08:35 np0005536586 dracut[1224]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 26 12:08:35 np0005536586 dracut[1224]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 26 12:08:35 np0005536586 dracut[1224]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 26 12:08:35 np0005536586 dracut[1224]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 26 12:08:35 np0005536586 dracut[1224]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 26 12:08:35 np0005536586 dracut[1224]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 26 12:08:35 np0005536586 dracut[1224]: *** Including module: openssl ***
Nov 26 12:08:35 np0005536586 dracut[1224]: *** Including module: shutdown ***
Nov 26 12:08:35 np0005536586 dracut[1224]: *** Including module: squash ***
Nov 26 12:08:35 np0005536586 dracut[1224]: *** Including modules done ***
Nov 26 12:08:35 np0005536586 dracut[1224]: *** Installing kernel module dependencies ***
Nov 26 12:08:36 np0005536586 dracut[1224]: *** Installing kernel module dependencies done ***
Nov 26 12:08:36 np0005536586 dracut[1224]: *** Resolving executable dependencies ***
Nov 26 12:08:37 np0005536586 dracut[1224]: *** Resolving executable dependencies done ***
Nov 26 12:08:37 np0005536586 dracut[1224]: *** Generating early-microcode cpio image ***
Nov 26 12:08:37 np0005536586 dracut[1224]: *** Store current command line parameters ***
Nov 26 12:08:37 np0005536586 dracut[1224]: Stored kernel commandline:
Nov 26 12:08:37 np0005536586 dracut[1224]: No dracut internal kernel commandline stored in the initramfs
Nov 26 12:08:37 np0005536586 dracut[1224]: *** Install squash loader ***
Nov 26 12:08:38 np0005536586 dracut[1224]: *** Squashing the files inside the initramfs ***
Nov 26 12:08:39 np0005536586 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 26 12:08:39 np0005536586 dracut[1224]: *** Squashing the files inside the initramfs done ***
Nov 26 12:08:39 np0005536586 dracut[1224]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 26 12:08:39 np0005536586 dracut[1224]: *** Hardlinking files ***
Nov 26 12:08:39 np0005536586 dracut[1224]: Mode:           real
Nov 26 12:08:39 np0005536586 dracut[1224]: Files:          50
Nov 26 12:08:39 np0005536586 dracut[1224]: Linked:         0 files
Nov 26 12:08:39 np0005536586 dracut[1224]: Compared:       0 xattrs
Nov 26 12:08:39 np0005536586 dracut[1224]: Compared:       0 files
Nov 26 12:08:39 np0005536586 dracut[1224]: Saved:          0 B
Nov 26 12:08:39 np0005536586 dracut[1224]: Duration:       0.000378 seconds
Nov 26 12:08:39 np0005536586 dracut[1224]: *** Hardlinking files done ***
Nov 26 12:08:39 np0005536586 dracut[1224]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Nov 26 12:08:40 np0005536586 kdumpctl[974]: kdump: kexec: loaded kdump kernel
Nov 26 12:08:40 np0005536586 kdumpctl[974]: kdump: Starting kdump: [OK]
Nov 26 12:08:40 np0005536586 systemd[1]: Finished Crash recovery kernel arming.
Nov 26 12:08:40 np0005536586 systemd[1]: Startup finished in 1.378s (kernel) + 2.089s (initrd) + 18.345s (userspace) = 21.813s.
Nov 26 12:08:54 np0005536586 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 26 12:08:56 np0005536586 sshd-session[4369]: Accepted publickey for zuul from 192.168.26.12 port 51796 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Nov 26 12:08:56 np0005536586 systemd[1]: Created slice User Slice of UID 1000.
Nov 26 12:08:56 np0005536586 systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 26 12:08:56 np0005536586 systemd-logind[777]: New session 1 of user zuul.
Nov 26 12:08:56 np0005536586 systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 26 12:08:56 np0005536586 systemd[1]: Starting User Manager for UID 1000...
Nov 26 12:08:56 np0005536586 systemd[4373]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:08:56 np0005536586 systemd[4373]: Queued start job for default target Main User Target.
Nov 26 12:08:56 np0005536586 systemd[4373]: Created slice User Application Slice.
Nov 26 12:08:56 np0005536586 systemd[4373]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 26 12:08:56 np0005536586 systemd[4373]: Started Daily Cleanup of User's Temporary Directories.
Nov 26 12:08:56 np0005536586 systemd[4373]: Reached target Paths.
Nov 26 12:08:56 np0005536586 systemd[4373]: Reached target Timers.
Nov 26 12:08:56 np0005536586 systemd[4373]: Starting D-Bus User Message Bus Socket...
Nov 26 12:08:56 np0005536586 systemd[4373]: Starting Create User's Volatile Files and Directories...
Nov 26 12:08:56 np0005536586 systemd[4373]: Listening on D-Bus User Message Bus Socket.
Nov 26 12:08:56 np0005536586 systemd[4373]: Finished Create User's Volatile Files and Directories.
Nov 26 12:08:56 np0005536586 systemd[4373]: Reached target Sockets.
Nov 26 12:08:56 np0005536586 systemd[4373]: Reached target Basic System.
Nov 26 12:08:56 np0005536586 systemd[1]: Started User Manager for UID 1000.
Nov 26 12:08:56 np0005536586 systemd[4373]: Reached target Main User Target.
Nov 26 12:08:56 np0005536586 systemd[4373]: Startup finished in 82ms.
Nov 26 12:08:56 np0005536586 systemd[1]: Started Session 1 of User zuul.
Nov 26 12:08:56 np0005536586 sshd-session[4369]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:08:56 np0005536586 python3[4455]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:08:58 np0005536586 python3[4483]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:09:03 np0005536586 python3[4537]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:09:04 np0005536586 python3[4577]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 26 12:09:05 np0005536586 python3[4603]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/RTVn/2oEGi9ibprRz+YqtX7woECYN+wj7oUywpfYq5KGdLJihwNbBYV9L1X4mAkzE+3kz9cRFBvPOBGmGx4SJoagnPPHf7ezYYCOJ4rvqZj/pPU7S/e1VN3+BJvq7NLAWumkwT5WTT+OxWyTg9hLyt1Pdexi3qsS+MdDiveQ6at0kCI3ictJsXIAnY2la8fjhIEtwXczzm22FLjclKsYMa/PBO+YRMjptc9xCtzoLIGJJk1nZ9JC8PPla0AAMSdqdPPqP68Dyaqr79tb43rKyMN1M+Oo6sNNCg409ijwukDoiKqy8S8gxdPMZV483hzkaX7oAWL3A8bQsaxSLMag/XL375u6KQjfVeNrPTT28v7UsWS2+2+gWg7NWlJuyUBXH0Tn/kjBqzmmUJ934MjXKMsEWjjB5yeJYfRL8OwluBoJswqMCsg2HwWbzakrFZsdgL0kcbGYcZLm0hhwGz3xhqfoRFhUcW1LSOM3DacF3uYbLSOzHb4AkpLXlVJ5nNs= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:09:05 np0005536586 python3[4627]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:09:06 np0005536586 python3[4726]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 12:09:06 np0005536586 python3[4797]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764158946.1064787-207-99758545323696/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=a8484921f15249798e152441754b1550_id_rsa follow=False checksum=a867e1b24f47bae0626df00812743af234cdb57e backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:09:06 np0005536586 python3[4920]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 12:09:07 np0005536586 python3[4991]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764158946.744187-240-119413442670033/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=a8484921f15249798e152441754b1550_id_rsa.pub follow=False checksum=a73671b2ca98633b83b685ceafe390a2024552ca backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:09:08 np0005536586 python3[5039]: ansible-ping Invoked with data=pong
Nov 26 12:09:08 np0005536586 python3[5063]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:09:10 np0005536586 python3[5117]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 26 12:09:11 np0005536586 python3[5149]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:09:11 np0005536586 python3[5173]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:09:11 np0005536586 python3[5197]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:09:11 np0005536586 python3[5221]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:09:11 np0005536586 python3[5245]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:09:12 np0005536586 python3[5269]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:09:13 np0005536586 sudo[5293]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roqyusyvkzydyvbnvhbtlwazmqwojuyo ; /usr/bin/python3'
Nov 26 12:09:13 np0005536586 sudo[5293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:09:13 np0005536586 irqbalance[772]: Cannot change IRQ 47 affinity: Operation not permitted
Nov 26 12:09:13 np0005536586 irqbalance[772]: IRQ 47 affinity is now unmanaged
Nov 26 12:09:13 np0005536586 python3[5295]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:09:13 np0005536586 sudo[5293]: pam_unix(sudo:session): session closed for user root
Nov 26 12:09:13 np0005536586 sudo[5371]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbtkwkupmsrhlqohplxlnrjisnmstamh ; /usr/bin/python3'
Nov 26 12:09:13 np0005536586 sudo[5371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:09:13 np0005536586 python3[5373]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 12:09:13 np0005536586 sudo[5371]: pam_unix(sudo:session): session closed for user root
Nov 26 12:09:14 np0005536586 sudo[5444]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktreabupbdbnudwxarcpiduaxonibcra ; /usr/bin/python3'
Nov 26 12:09:14 np0005536586 sudo[5444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:09:14 np0005536586 python3[5446]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764158953.537438-21-100867771414356/source follow=False _original_basename=mirror_info.sh.j2 checksum=3f92644b791816833989d215b9a84c589a7b8ebd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:09:14 np0005536586 sudo[5444]: pam_unix(sudo:session): session closed for user root
Nov 26 12:09:14 np0005536586 python3[5494]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:09:14 np0005536586 python3[5518]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:09:14 np0005536586 python3[5542]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:09:15 np0005536586 python3[5566]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:09:15 np0005536586 python3[5590]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:09:15 np0005536586 python3[5614]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:09:15 np0005536586 python3[5638]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:09:16 np0005536586 python3[5662]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:09:16 np0005536586 python3[5686]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:09:16 np0005536586 python3[5710]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:09:16 np0005536586 python3[5734]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:09:16 np0005536586 python3[5758]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:09:17 np0005536586 python3[5782]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:09:17 np0005536586 python3[5806]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:09:17 np0005536586 python3[5830]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:09:17 np0005536586 python3[5854]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:09:17 np0005536586 python3[5878]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:09:17 np0005536586 python3[5902]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:09:18 np0005536586 python3[5926]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:09:18 np0005536586 python3[5950]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:09:18 np0005536586 python3[5974]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:09:18 np0005536586 python3[5998]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:09:18 np0005536586 python3[6022]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:09:19 np0005536586 python3[6046]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:09:19 np0005536586 python3[6070]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:09:19 np0005536586 python3[6094]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:09:22 np0005536586 sudo[6118]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhzzlcifqyyrnkukvoslidksccuezqfl ; /usr/bin/python3'
Nov 26 12:09:22 np0005536586 sudo[6118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:09:22 np0005536586 python3[6120]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 26 12:09:22 np0005536586 systemd[1]: Starting Time & Date Service...
Nov 26 12:09:22 np0005536586 systemd[1]: Started Time & Date Service.
Nov 26 12:09:22 np0005536586 systemd-timedated[6122]: Changed time zone to 'UTC' (UTC).
Nov 26 12:09:22 np0005536586 sudo[6118]: pam_unix(sudo:session): session closed for user root
Nov 26 12:09:22 np0005536586 sudo[6149]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gomdqjwgpomibgijqgzmroibydbsqvms ; /usr/bin/python3'
Nov 26 12:09:22 np0005536586 sudo[6149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:09:22 np0005536586 python3[6151]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:09:22 np0005536586 sudo[6149]: pam_unix(sudo:session): session closed for user root
Nov 26 12:09:23 np0005536586 python3[6227]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 12:09:23 np0005536586 python3[6298]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764158962.9836133-153-23651387187932/source _original_basename=tmp39qqcbuz follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:09:23 np0005536586 python3[6398]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 12:09:23 np0005536586 python3[6469]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764158963.5828688-183-261698095444332/source _original_basename=tmptacta_pp follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:09:24 np0005536586 sudo[6569]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytmzoxtqoirnamzalqqjofbgtgfdcwqf ; /usr/bin/python3'
Nov 26 12:09:24 np0005536586 sudo[6569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:09:24 np0005536586 python3[6571]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 12:09:24 np0005536586 sudo[6569]: pam_unix(sudo:session): session closed for user root
Nov 26 12:09:24 np0005536586 sudo[6642]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brorfvnxrmkeegnwlzddmjbgpkuugshf ; /usr/bin/python3'
Nov 26 12:09:24 np0005536586 sudo[6642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:09:24 np0005536586 python3[6644]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764158964.3616507-231-277102431289385/source _original_basename=tmp8hlujy0p follow=False checksum=43d6bf474fe3176ca4d99e899bb0d692cb0324b7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:09:24 np0005536586 sudo[6642]: pam_unix(sudo:session): session closed for user root
Nov 26 12:09:25 np0005536586 python3[6692]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:09:25 np0005536586 python3[6718]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:09:25 np0005536586 sudo[6796]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezxgfcfkjvhtfcywfbxurkvuzqgujutn ; /usr/bin/python3'
Nov 26 12:09:25 np0005536586 sudo[6796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:09:25 np0005536586 python3[6798]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 12:09:25 np0005536586 sudo[6796]: pam_unix(sudo:session): session closed for user root
Nov 26 12:09:25 np0005536586 sudo[6869]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twnnfyphtgliwuncdtghocjtjxhvsrse ; /usr/bin/python3'
Nov 26 12:09:25 np0005536586 sudo[6869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:09:26 np0005536586 python3[6871]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764158965.6637838-273-276585713401862/source _original_basename=tmpqspoaqlu follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:09:26 np0005536586 sudo[6869]: pam_unix(sudo:session): session closed for user root
Nov 26 12:09:26 np0005536586 sudo[6920]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyctdwokxqvyswvkvavpbttlyorztenx ; /usr/bin/python3'
Nov 26 12:09:26 np0005536586 sudo[6920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:09:26 np0005536586 python3[6922]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e08-49e2-e995-ab1c-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:09:26 np0005536586 sudo[6920]: pam_unix(sudo:session): session closed for user root
Nov 26 12:09:27 np0005536586 python3[6950]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                             _uses_shell=True zuul_log_id=fa163e08-49e2-e995-ab1c-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 26 12:09:28 np0005536586 python3[6978]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:09:43 np0005536586 sudo[7002]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qasraivyigorrjnnirapvbqjkqghfxft ; /usr/bin/python3'
Nov 26 12:09:43 np0005536586 sudo[7002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:09:44 np0005536586 python3[7004]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:09:44 np0005536586 sudo[7002]: pam_unix(sudo:session): session closed for user root
Nov 26 12:09:52 np0005536586 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 26 12:10:07 np0005536586 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Nov 26 12:10:07 np0005536586 kernel: pci 0000:07:00.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 26 12:10:07 np0005536586 kernel: pci 0000:07:00.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 26 12:10:07 np0005536586 kernel: pci 0000:07:00.0: ROM [mem 0x00000000-0x0003ffff pref]
Nov 26 12:10:07 np0005536586 kernel: pci 0000:07:00.0: ROM [mem 0xfe000000-0xfe03ffff pref]: assigned
Nov 26 12:10:07 np0005536586 kernel: pci 0000:07:00.0: BAR 4 [mem 0xfb600000-0xfb603fff 64bit pref]: assigned
Nov 26 12:10:07 np0005536586 kernel: pci 0000:07:00.0: BAR 1 [mem 0xfe040000-0xfe040fff]: assigned
Nov 26 12:10:07 np0005536586 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Nov 26 12:10:07 np0005536586 NetworkManager[812]: <info>  [1764159007.5047] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 26 12:10:07 np0005536586 systemd-udevd[7007]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 12:10:07 np0005536586 NetworkManager[812]: <info>  [1764159007.5303] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 12:10:07 np0005536586 NetworkManager[812]: <info>  [1764159007.5320] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 26 12:10:07 np0005536586 NetworkManager[812]: <info>  [1764159007.5322] device (eth1): carrier: link connected
Nov 26 12:10:07 np0005536586 NetworkManager[812]: <info>  [1764159007.5324] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 26 12:10:07 np0005536586 NetworkManager[812]: <info>  [1764159007.5328] policy: auto-activating connection 'Wired connection 1' (09541ed1-27f0-3dab-920e-bf33aaba73ff)
Nov 26 12:10:07 np0005536586 NetworkManager[812]: <info>  [1764159007.5331] device (eth1): Activation: starting connection 'Wired connection 1' (09541ed1-27f0-3dab-920e-bf33aaba73ff)
Nov 26 12:10:07 np0005536586 NetworkManager[812]: <info>  [1764159007.5331] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 12:10:07 np0005536586 NetworkManager[812]: <info>  [1764159007.5333] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 12:10:07 np0005536586 NetworkManager[812]: <info>  [1764159007.5336] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 12:10:07 np0005536586 NetworkManager[812]: <info>  [1764159007.5339] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 26 12:10:07 np0005536586 python3[7034]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e08-49e2-5cb0-f397-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:10:17 np0005536586 sudo[7112]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfbxvdktfbyvapnllcssronlrvdedinl ; OS_CLOUD=ibm-bm4-nodepool /usr/bin/python3'
Nov 26 12:10:17 np0005536586 sudo[7112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:10:17 np0005536586 python3[7114]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 12:10:17 np0005536586 sudo[7112]: pam_unix(sudo:session): session closed for user root
Nov 26 12:10:17 np0005536586 sudo[7185]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kunrtdmgkrfxkgeockzvbwcbjsvvxcqh ; OS_CLOUD=ibm-bm4-nodepool /usr/bin/python3'
Nov 26 12:10:17 np0005536586 sudo[7185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:10:17 np0005536586 python3[7187]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764159017.4439554-111-114963850421969/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=cafd78e264adfbd2a32b952d1e03afef2f90c19f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:10:17 np0005536586 sudo[7185]: pam_unix(sudo:session): session closed for user root
Nov 26 12:10:18 np0005536586 sudo[7235]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohbzkednovqfutvmtnbktttycmbbwoup ; OS_CLOUD=ibm-bm4-nodepool /usr/bin/python3'
Nov 26 12:10:18 np0005536586 sudo[7235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:10:18 np0005536586 python3[7237]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 12:10:18 np0005536586 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 26 12:10:18 np0005536586 systemd[1]: Stopped Network Manager Wait Online.
Nov 26 12:10:18 np0005536586 systemd[1]: Stopping Network Manager Wait Online...
Nov 26 12:10:18 np0005536586 NetworkManager[812]: <info>  [1764159018.5270] caught SIGTERM, shutting down normally.
Nov 26 12:10:18 np0005536586 systemd[1]: Stopping Network Manager...
Nov 26 12:10:18 np0005536586 NetworkManager[812]: <info>  [1764159018.5275] dhcp4 (eth0): canceled DHCP transaction
Nov 26 12:10:18 np0005536586 NetworkManager[812]: <info>  [1764159018.5275] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 12:10:18 np0005536586 NetworkManager[812]: <info>  [1764159018.5275] dhcp4 (eth0): state changed no lease
Nov 26 12:10:18 np0005536586 NetworkManager[812]: <info>  [1764159018.5276] dhcp6 (eth0): canceled DHCP transaction
Nov 26 12:10:18 np0005536586 NetworkManager[812]: <info>  [1764159018.5276] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 12:10:18 np0005536586 NetworkManager[812]: <info>  [1764159018.5276] dhcp6 (eth0): state changed no lease
Nov 26 12:10:18 np0005536586 NetworkManager[812]: <info>  [1764159018.5278] manager: NetworkManager state is now CONNECTING
Nov 26 12:10:18 np0005536586 NetworkManager[812]: <info>  [1764159018.5427] dhcp4 (eth1): canceled DHCP transaction
Nov 26 12:10:18 np0005536586 NetworkManager[812]: <info>  [1764159018.5428] dhcp4 (eth1): state changed no lease
Nov 26 12:10:18 np0005536586 NetworkManager[812]: <info>  [1764159018.5445] exiting (success)
Nov 26 12:10:18 np0005536586 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 26 12:10:18 np0005536586 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 26 12:10:18 np0005536586 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 26 12:10:18 np0005536586 systemd[1]: Stopped Network Manager.
Nov 26 12:10:18 np0005536586 systemd[1]: Starting Network Manager...
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.5919] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:031c7117-1661-4641-8ff4-d1885bc6a83e)
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.5920] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.5958] manager[0x55d338226090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 26 12:10:18 np0005536586 systemd[1]: Starting Hostname Service...
Nov 26 12:10:18 np0005536586 systemd[1]: Started Hostname Service.
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6541] hostname: hostname: using hostnamed
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6542] hostname: static hostname changed from (none) to "np0005536586"
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6545] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6548] manager[0x55d338226090]: rfkill: Wi-Fi hardware radio set enabled
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6549] manager[0x55d338226090]: rfkill: WWAN hardware radio set enabled
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6567] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6568] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6569] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6570] manager: Networking is enabled by state file
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6573] settings: Loaded settings plugin: keyfile (internal)
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6576] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6593] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6599] dhcp: init: Using DHCP client 'internal'
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6601] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6604] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6608] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6613] device (lo): Activation: starting connection 'lo' (14d47366-79b4-47b4-8c24-e57561e2dedc)
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6618] device (eth0): carrier: link connected
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6622] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6625] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6626] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6630] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6636] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6641] device (eth1): carrier: link connected
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6644] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6649] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (09541ed1-27f0-3dab-920e-bf33aaba73ff) (indicated)
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6650] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6653] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6658] device (eth1): Activation: starting connection 'Wired connection 1' (09541ed1-27f0-3dab-920e-bf33aaba73ff)
Nov 26 12:10:18 np0005536586 systemd[1]: Started Network Manager.
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6662] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6666] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6668] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6669] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6670] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6672] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6674] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6676] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6677] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6683] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6685] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6688] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6690] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6699] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6703] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6714] dhcp4 (eth0): state changed new lease, address=192.168.26.109
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6716] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6721] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 26 12:10:18 np0005536586 systemd[1]: Starting Network Manager Wait Online...
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6746] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 26 12:10:18 np0005536586 NetworkManager[7252]: <info>  [1764159018.6751] device (lo): Activation: successful, device activated.
Nov 26 12:10:18 np0005536586 sudo[7235]: pam_unix(sudo:session): session closed for user root
Nov 26 12:10:18 np0005536586 python3[7309]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e08-49e2-5cb0-f397-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:10:19 np0005536586 NetworkManager[7252]: <info>  [1764159019.7568] dhcp6 (eth0): state changed new lease, address=2001:db8::f0
Nov 26 12:10:19 np0005536586 NetworkManager[7252]: <info>  [1764159019.7576] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 26 12:10:19 np0005536586 NetworkManager[7252]: <info>  [1764159019.7605] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 26 12:10:19 np0005536586 NetworkManager[7252]: <info>  [1764159019.7606] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 26 12:10:19 np0005536586 NetworkManager[7252]: <info>  [1764159019.7608] manager: NetworkManager state is now CONNECTED_SITE
Nov 26 12:10:19 np0005536586 NetworkManager[7252]: <info>  [1764159019.7610] device (eth0): Activation: successful, device activated.
Nov 26 12:10:19 np0005536586 NetworkManager[7252]: <info>  [1764159019.7614] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 26 12:10:29 np0005536586 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 26 12:10:48 np0005536586 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 26 12:10:58 np0005536586 sudo[7408]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcfcazptvfcebjugensedgqaqzanuiik ; OS_CLOUD=ibm-bm4-nodepool /usr/bin/python3'
Nov 26 12:10:58 np0005536586 sudo[7408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:10:58 np0005536586 python3[7410]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 12:10:58 np0005536586 sudo[7408]: pam_unix(sudo:session): session closed for user root
Nov 26 12:10:58 np0005536586 sudo[7481]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwmycquflvgjboeryqwhvcfdoxsxkwvd ; OS_CLOUD=ibm-bm4-nodepool /usr/bin/python3'
Nov 26 12:10:58 np0005536586 sudo[7481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:10:58 np0005536586 python3[7483]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764159058.4727163-273-80592289281797/source _original_basename=tmpxbznffdp follow=False checksum=421723b73c71618e6142a2656fd71173f072c227 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:10:58 np0005536586 sudo[7481]: pam_unix(sudo:session): session closed for user root
Nov 26 12:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4000] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 26 12:11:04 np0005536586 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 26 12:11:04 np0005536586 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 26 12:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4254] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 26 12:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4256] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 26 12:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4261] device (eth1): Activation: successful, device activated.
Nov 26 12:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4265] manager: startup complete
Nov 26 12:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4266] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 26 12:11:04 np0005536586 NetworkManager[7252]: <warn>  [1764159064.4270] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 26 12:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4275] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 26 12:11:04 np0005536586 systemd[1]: Finished Network Manager Wait Online.
Nov 26 12:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4338] dhcp4 (eth1): canceled DHCP transaction
Nov 26 12:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4338] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 26 12:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4338] dhcp4 (eth1): state changed no lease
Nov 26 12:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4347] policy: auto-activating connection 'ci-private-network' (7797382d-d835-51bb-84eb-feed5516994b)
Nov 26 12:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4350] device (eth1): Activation: starting connection 'ci-private-network' (7797382d-d835-51bb-84eb-feed5516994b)
Nov 26 12:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4351] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 12:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4352] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 12:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4356] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 12:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4363] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 12:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4381] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 12:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4382] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 12:11:04 np0005536586 NetworkManager[7252]: <info>  [1764159064.4386] device (eth1): Activation: successful, device activated.
Nov 26 12:11:04 np0005536586 systemd[4373]: Starting Mark boot as successful...
Nov 26 12:11:04 np0005536586 systemd[4373]: Finished Mark boot as successful.
Nov 26 12:11:14 np0005536586 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 26 12:11:59 np0005536586 sshd-session[4382]: Received disconnect from 192.168.26.12 port 51796:11: disconnected by user
Nov 26 12:11:59 np0005536586 sshd-session[4382]: Disconnected from user zuul 192.168.26.12 port 51796
Nov 26 12:11:59 np0005536586 sshd-session[4369]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:11:59 np0005536586 systemd-logind[777]: Session 1 logged out. Waiting for processes to exit.
Nov 26 12:14:04 np0005536586 systemd[4373]: Created slice User Background Tasks Slice.
Nov 26 12:14:04 np0005536586 systemd[4373]: Starting Cleanup of User's Temporary Files and Directories...
Nov 26 12:14:04 np0005536586 systemd[4373]: Finished Cleanup of User's Temporary Files and Directories.
Nov 26 12:15:02 np0005536586 sshd-session[7536]: Accepted publickey for zuul from 192.168.26.12 port 40460 ssh2: RSA SHA256:uSHoHww2H0x1DJ3EZPnNe4LJTY0mkFHKbJRE/2eWBow
Nov 26 12:15:02 np0005536586 systemd-logind[777]: New session 3 of user zuul.
Nov 26 12:15:02 np0005536586 systemd[1]: Started Session 3 of User zuul.
Nov 26 12:15:02 np0005536586 sshd-session[7536]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:15:02 np0005536586 sudo[7563]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srokclqnwdcbaoqzabvujdjdqgxahyhd ; /usr/bin/python3'
Nov 26 12:15:02 np0005536586 sudo[7563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:15:02 np0005536586 python3[7565]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                             _uses_shell=True zuul_log_id=fa163e08-49e2-c138-c2a2-000000001cc2-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:15:02 np0005536586 sudo[7563]: pam_unix(sudo:session): session closed for user root
Nov 26 12:15:02 np0005536586 sudo[7592]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcxnevqvuxflaznlocjrfwgjugycurfa ; /usr/bin/python3'
Nov 26 12:15:02 np0005536586 sudo[7592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:15:02 np0005536586 python3[7594]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:15:02 np0005536586 sudo[7592]: pam_unix(sudo:session): session closed for user root
Nov 26 12:15:02 np0005536586 sudo[7618]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnmvryhvieyatuxzacwpejnnfvibpivb ; /usr/bin/python3'
Nov 26 12:15:02 np0005536586 sudo[7618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:15:02 np0005536586 python3[7620]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:15:02 np0005536586 sudo[7618]: pam_unix(sudo:session): session closed for user root
Nov 26 12:15:02 np0005536586 sudo[7644]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cckbdwweowfqplofxzgxersvvyzgqtqk ; /usr/bin/python3'
Nov 26 12:15:02 np0005536586 sudo[7644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:15:03 np0005536586 python3[7646]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:15:03 np0005536586 sudo[7644]: pam_unix(sudo:session): session closed for user root
Nov 26 12:15:03 np0005536586 sudo[7670]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohudekmheqwmdauugykkukjnkuifmkgx ; /usr/bin/python3'
Nov 26 12:15:03 np0005536586 sudo[7670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:15:03 np0005536586 python3[7672]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:15:03 np0005536586 sudo[7670]: pam_unix(sudo:session): session closed for user root
Nov 26 12:15:03 np0005536586 sudo[7696]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlgjytauhjewngtxtzqsiauzvbagmthg ; /usr/bin/python3'
Nov 26 12:15:03 np0005536586 sudo[7696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:15:03 np0005536586 python3[7698]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:15:03 np0005536586 sudo[7696]: pam_unix(sudo:session): session closed for user root
Nov 26 12:15:04 np0005536586 sudo[7774]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbhdglussgjzeyouirksyzlkeerxlfmm ; /usr/bin/python3'
Nov 26 12:15:04 np0005536586 sudo[7774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:15:04 np0005536586 python3[7776]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 12:15:04 np0005536586 sudo[7774]: pam_unix(sudo:session): session closed for user root
Nov 26 12:15:04 np0005536586 sudo[7847]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itxsxslrwqmhihggitglptqwmswypeda ; /usr/bin/python3'
Nov 26 12:15:04 np0005536586 sudo[7847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:15:04 np0005536586 python3[7849]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764159304.0124166-464-228299459114709/source _original_basename=tmpm9ojmfgr follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:15:04 np0005536586 sudo[7847]: pam_unix(sudo:session): session closed for user root
Nov 26 12:15:05 np0005536586 sudo[7897]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkefntbphhxwgfdacgffucprkosdpmsj ; /usr/bin/python3'
Nov 26 12:15:05 np0005536586 sudo[7897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:15:05 np0005536586 python3[7899]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 12:15:05 np0005536586 systemd[1]: Reloading.
Nov 26 12:15:05 np0005536586 systemd-rc-local-generator[7918]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:15:05 np0005536586 sudo[7897]: pam_unix(sudo:session): session closed for user root
Nov 26 12:15:06 np0005536586 sudo[7953]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjjnipjychxemqomjkoqehhvexbkbusr ; /usr/bin/python3'
Nov 26 12:15:06 np0005536586 sudo[7953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:15:06 np0005536586 python3[7955]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 26 12:15:06 np0005536586 sudo[7953]: pam_unix(sudo:session): session closed for user root
Nov 26 12:15:06 np0005536586 sudo[7979]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmvntiuqgmtrkkyhdezhkkrlxzzlzsnl ; /usr/bin/python3'
Nov 26 12:15:06 np0005536586 sudo[7979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:15:06 np0005536586 python3[7981]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                             _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:15:06 np0005536586 sudo[7979]: pam_unix(sudo:session): session closed for user root
Nov 26 12:15:06 np0005536586 sudo[8007]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trbhhjiyqctmlhsiysmmucbljvijpuep ; /usr/bin/python3'
Nov 26 12:15:06 np0005536586 sudo[8007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:15:06 np0005536586 python3[8009]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                             _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:15:07 np0005536586 sudo[8007]: pam_unix(sudo:session): session closed for user root
Nov 26 12:15:07 np0005536586 sudo[8035]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqogptqqcizlxxzriilnlfjutuehqsnm ; /usr/bin/python3'
Nov 26 12:15:07 np0005536586 sudo[8035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:15:07 np0005536586 python3[8037]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                             _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:15:07 np0005536586 sudo[8035]: pam_unix(sudo:session): session closed for user root
Nov 26 12:15:07 np0005536586 sudo[8063]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drclqgpeumhqjlrdjjghyryzsystjhdo ; /usr/bin/python3'
Nov 26 12:15:07 np0005536586 sudo[8063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:15:07 np0005536586 python3[8065]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                             _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:15:07 np0005536586 sudo[8063]: pam_unix(sudo:session): session closed for user root
Nov 26 12:15:07 np0005536586 python3[8092]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                             _uses_shell=True zuul_log_id=fa163e08-49e2-c138-c2a2-000000001cc9-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:15:08 np0005536586 python3[8122]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 12:15:10 np0005536586 sshd-session[7539]: Connection closed by 192.168.26.12 port 40460
Nov 26 12:15:10 np0005536586 sshd-session[7536]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:15:10 np0005536586 systemd[1]: session-3.scope: Deactivated successfully.
Nov 26 12:15:10 np0005536586 systemd[1]: session-3.scope: Consumed 2.914s CPU time.
Nov 26 12:15:10 np0005536586 systemd-logind[777]: Session 3 logged out. Waiting for processes to exit.
Nov 26 12:15:10 np0005536586 systemd-logind[777]: Removed session 3.
Nov 26 12:15:12 np0005536586 sshd-session[8128]: Accepted publickey for zuul from 192.168.26.12 port 51730 ssh2: RSA SHA256:uSHoHww2H0x1DJ3EZPnNe4LJTY0mkFHKbJRE/2eWBow
Nov 26 12:15:12 np0005536586 systemd-logind[777]: New session 4 of user zuul.
Nov 26 12:15:12 np0005536586 systemd[1]: Started Session 4 of User zuul.
Nov 26 12:15:12 np0005536586 sshd-session[8128]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:15:12 np0005536586 sudo[8155]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jatzrxtvzvnjrnndgbvyaslvowosiwgw ; /usr/bin/python3'
Nov 26 12:15:12 np0005536586 sudo[8155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:15:12 np0005536586 python3[8157]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 26 12:15:30 np0005536586 kernel: SELinux:  Converting 386 SID table entries...
Nov 26 12:15:30 np0005536586 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 12:15:30 np0005536586 kernel: SELinux:  policy capability open_perms=1
Nov 26 12:15:30 np0005536586 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 12:15:30 np0005536586 kernel: SELinux:  policy capability always_check_network=0
Nov 26 12:15:30 np0005536586 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 12:15:30 np0005536586 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 12:15:30 np0005536586 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 12:15:37 np0005536586 kernel: SELinux:  Converting 386 SID table entries...
Nov 26 12:15:37 np0005536586 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 12:15:37 np0005536586 kernel: SELinux:  policy capability open_perms=1
Nov 26 12:15:37 np0005536586 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 12:15:37 np0005536586 kernel: SELinux:  policy capability always_check_network=0
Nov 26 12:15:37 np0005536586 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 12:15:37 np0005536586 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 12:15:37 np0005536586 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 12:15:43 np0005536586 kernel: SELinux:  Converting 386 SID table entries...
Nov 26 12:15:44 np0005536586 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 12:15:44 np0005536586 kernel: SELinux:  policy capability open_perms=1
Nov 26 12:15:44 np0005536586 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 12:15:44 np0005536586 kernel: SELinux:  policy capability always_check_network=0
Nov 26 12:15:44 np0005536586 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 12:15:44 np0005536586 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 12:15:44 np0005536586 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 12:15:44 np0005536586 setsebool[8226]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 26 12:15:44 np0005536586 setsebool[8226]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 26 12:15:53 np0005536586 kernel: SELinux:  Converting 389 SID table entries...
Nov 26 12:15:53 np0005536586 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 12:15:53 np0005536586 kernel: SELinux:  policy capability open_perms=1
Nov 26 12:15:53 np0005536586 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 12:15:53 np0005536586 kernel: SELinux:  policy capability always_check_network=0
Nov 26 12:15:53 np0005536586 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 12:15:53 np0005536586 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 12:15:53 np0005536586 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 12:16:05 np0005536586 dbus-broker-launch[767]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 26 12:16:05 np0005536586 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 12:16:05 np0005536586 systemd[1]: Starting man-db-cache-update.service...
Nov 26 12:16:05 np0005536586 systemd[1]: Reloading.
Nov 26 12:16:05 np0005536586 systemd-rc-local-generator[8975]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:16:05 np0005536586 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 12:16:06 np0005536586 sudo[8155]: pam_unix(sudo:session): session closed for user root
Nov 26 12:16:08 np0005536586 python3[13727]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                              _uses_shell=True zuul_log_id=fa163e08-49e2-c994-0e10-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:16:09 np0005536586 kernel: evm: overlay not supported
Nov 26 12:16:09 np0005536586 systemd[4373]: Starting D-Bus User Message Bus...
Nov 26 12:16:09 np0005536586 dbus-broker-launch[14046]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 26 12:16:09 np0005536586 dbus-broker-launch[14046]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 26 12:16:09 np0005536586 systemd[4373]: Started D-Bus User Message Bus.
Nov 26 12:16:09 np0005536586 dbus-broker-lau[14046]: Ready
Nov 26 12:16:09 np0005536586 systemd[4373]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 26 12:16:09 np0005536586 systemd[4373]: Created slice Slice /user.
Nov 26 12:16:09 np0005536586 systemd[4373]: podman-14027.scope: unit configures an IP firewall, but not running as root.
Nov 26 12:16:09 np0005536586 systemd[4373]: (This warning is only shown for the first unit using IP firewalling.)
Nov 26 12:16:09 np0005536586 systemd[4373]: Started podman-14027.scope.
Nov 26 12:16:09 np0005536586 systemd[4373]: Started podman-pause-b862250b.scope.
Nov 26 12:16:10 np0005536586 sudo[14830]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaewmfnxtyfgllpzucdavnnslilozazr ; /usr/bin/python3'
Nov 26 12:16:10 np0005536586 sudo[14830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:16:10 np0005536586 python3[14847]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                             location = "38.102.83.98:5001"
                                             insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                             location = "38.102.83.98:5001"
                                             insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:16:10 np0005536586 python3[14847]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 26 12:16:10 np0005536586 sudo[14830]: pam_unix(sudo:session): session closed for user root
Nov 26 12:16:10 np0005536586 sshd-session[8131]: Connection closed by 192.168.26.12 port 51730
Nov 26 12:16:10 np0005536586 sshd-session[8128]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:16:10 np0005536586 systemd[1]: session-4.scope: Deactivated successfully.
Nov 26 12:16:10 np0005536586 systemd[1]: session-4.scope: Consumed 44.117s CPU time.
Nov 26 12:16:10 np0005536586 systemd-logind[777]: Session 4 logged out. Waiting for processes to exit.
Nov 26 12:16:10 np0005536586 systemd-logind[777]: Removed session 4.
Nov 26 12:16:29 np0005536586 sshd-session[28466]: Unable to negotiate with 192.168.26.112 port 42702: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 26 12:16:29 np0005536586 sshd-session[28467]: Connection closed by 192.168.26.112 port 42660 [preauth]
Nov 26 12:16:29 np0005536586 sshd-session[28469]: Connection closed by 192.168.26.112 port 42666 [preauth]
Nov 26 12:16:29 np0005536586 sshd-session[28471]: Unable to negotiate with 192.168.26.112 port 42674: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 26 12:16:29 np0005536586 sshd-session[28472]: Unable to negotiate with 192.168.26.112 port 42690: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 26 12:16:31 np0005536586 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 12:16:31 np0005536586 systemd[1]: Finished man-db-cache-update.service.
Nov 26 12:16:31 np0005536586 systemd[1]: man-db-cache-update.service: Consumed 31.653s CPU time.
Nov 26 12:16:31 np0005536586 systemd[1]: run-r20419a130aa9457785c77a38cdb18796.service: Deactivated successfully.
Nov 26 12:16:38 np0005536586 sshd-session[29699]: Accepted publickey for zuul from 192.168.26.12 port 39736 ssh2: RSA SHA256:uSHoHww2H0x1DJ3EZPnNe4LJTY0mkFHKbJRE/2eWBow
Nov 26 12:16:38 np0005536586 systemd-logind[777]: New session 5 of user zuul.
Nov 26 12:16:38 np0005536586 systemd[1]: Started Session 5 of User zuul.
Nov 26 12:16:38 np0005536586 sshd-session[29699]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:16:38 np0005536586 python3[29726]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMtucC4FnQax+Pf8Gg4D4fwS2XgcMuHy3SVvy9tgSF3TJREVyHTUZwq0O8++3exJwNg0p9V8ej/sUTptFsOBBK4= zuul@np0005536585
                                              manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:16:39 np0005536586 sudo[29750]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pphixpyrlekoczikfxdvwcxuhalytiyy ; /usr/bin/python3'
Nov 26 12:16:39 np0005536586 sudo[29750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:16:39 np0005536586 python3[29752]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMtucC4FnQax+Pf8Gg4D4fwS2XgcMuHy3SVvy9tgSF3TJREVyHTUZwq0O8++3exJwNg0p9V8ej/sUTptFsOBBK4= zuul@np0005536585
                                              manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:16:39 np0005536586 sudo[29750]: pam_unix(sudo:session): session closed for user root
Nov 26 12:16:39 np0005536586 sudo[29776]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyblqntukriviasohpszfgnlhhfoonhu ; /usr/bin/python3'
Nov 26 12:16:39 np0005536586 sudo[29776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:16:39 np0005536586 python3[29778]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005536586 update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 26 12:16:39 np0005536586 useradd[29780]: new group: name=cloud-admin, GID=1002
Nov 26 12:16:39 np0005536586 useradd[29780]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Nov 26 12:16:39 np0005536586 sudo[29776]: pam_unix(sudo:session): session closed for user root
Nov 26 12:16:39 np0005536586 sudo[29810]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsqtrtetbfjgbomzynkniheajhhehcri ; /usr/bin/python3'
Nov 26 12:16:39 np0005536586 sudo[29810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:16:40 np0005536586 python3[29812]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMtucC4FnQax+Pf8Gg4D4fwS2XgcMuHy3SVvy9tgSF3TJREVyHTUZwq0O8++3exJwNg0p9V8ej/sUTptFsOBBK4= zuul@np0005536585
                                              manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 12:16:40 np0005536586 sudo[29810]: pam_unix(sudo:session): session closed for user root
Nov 26 12:16:40 np0005536586 sudo[29888]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwickbpbjltmyvulohsgibwhplevacmz ; /usr/bin/python3'
Nov 26 12:16:40 np0005536586 sudo[29888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:16:40 np0005536586 python3[29890]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 12:16:40 np0005536586 sudo[29888]: pam_unix(sudo:session): session closed for user root
Nov 26 12:16:40 np0005536586 sudo[29961]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpqkgtpedcoslirfvkagzwgggxvmibun ; /usr/bin/python3'
Nov 26 12:16:40 np0005536586 sudo[29961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:16:40 np0005536586 python3[29963]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764159400.2022147-137-66963313755005/source _original_basename=tmpsqwbgdw3 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:16:40 np0005536586 sudo[29961]: pam_unix(sudo:session): session closed for user root
Nov 26 12:16:41 np0005536586 sudo[30011]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wadkbnhvnktsijmzlvynttibchlggfmo ; /usr/bin/python3'
Nov 26 12:16:41 np0005536586 sudo[30011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:16:41 np0005536586 python3[30013]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 26 12:16:41 np0005536586 systemd[1]: Starting Hostname Service...
Nov 26 12:16:41 np0005536586 systemd[1]: Started Hostname Service.
Nov 26 12:16:41 np0005536586 systemd-hostnamed[30017]: Changed pretty hostname to 'compute-0'
Nov 26 12:16:41 compute-0 systemd-hostnamed[30017]: Hostname set to <compute-0> (static)
Nov 26 12:16:41 compute-0 NetworkManager[7252]: <info>  [1764159401.4347] hostname: static hostname changed from "np0005536586" to "compute-0"
Nov 26 12:16:41 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 26 12:16:41 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 26 12:16:41 compute-0 sudo[30011]: pam_unix(sudo:session): session closed for user root
Nov 26 12:16:41 compute-0 sshd-session[29702]: Connection closed by 192.168.26.12 port 39736
Nov 26 12:16:41 compute-0 sshd-session[29699]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:16:41 compute-0 systemd[1]: session-5.scope: Deactivated successfully.
Nov 26 12:16:41 compute-0 systemd[1]: session-5.scope: Consumed 1.676s CPU time.
Nov 26 12:16:41 compute-0 systemd-logind[777]: Session 5 logged out. Waiting for processes to exit.
Nov 26 12:16:41 compute-0 systemd-logind[777]: Removed session 5.
Nov 26 12:16:51 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 26 12:17:11 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 26 12:20:26 compute-0 sshd-session[30034]: Accepted publickey for zuul from 192.168.26.112 port 44164 ssh2: RSA SHA256:uSHoHww2H0x1DJ3EZPnNe4LJTY0mkFHKbJRE/2eWBow
Nov 26 12:20:26 compute-0 systemd-logind[777]: New session 6 of user zuul.
Nov 26 12:20:26 compute-0 systemd[1]: Started Session 6 of User zuul.
Nov 26 12:20:26 compute-0 sshd-session[30034]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:20:26 compute-0 python3[30110]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:20:27 compute-0 sudo[30220]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lurvcgyybulibjwmxhyvirinvhhpihte ; /usr/bin/python3'
Nov 26 12:20:27 compute-0 sudo[30220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:20:28 compute-0 python3[30222]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 12:20:28 compute-0 sudo[30220]: pam_unix(sudo:session): session closed for user root
Nov 26 12:20:28 compute-0 sudo[30293]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmcqqvfqqtyscnebvgtsayhkkzkdftei ; /usr/bin/python3'
Nov 26 12:20:28 compute-0 sudo[30293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:20:28 compute-0 python3[30295]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764159627.8925967-34009-59046118125108/source mode=0755 _original_basename=delorean.repo follow=False checksum=cdee622b4b81aba8f448eb3a0d6bf38022474867 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:20:28 compute-0 sudo[30293]: pam_unix(sudo:session): session closed for user root
Nov 26 12:20:28 compute-0 sudo[30319]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tleylsamyzslucefjvgzppdfovmqavto ; /usr/bin/python3'
Nov 26 12:20:28 compute-0 sudo[30319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:20:28 compute-0 python3[30321]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 12:20:28 compute-0 sudo[30319]: pam_unix(sudo:session): session closed for user root
Nov 26 12:20:28 compute-0 sudo[30392]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtatmndydpjtllufejodjdqhwvqncknj ; /usr/bin/python3'
Nov 26 12:20:28 compute-0 sudo[30392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:20:28 compute-0 python3[30394]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764159627.8925967-34009-59046118125108/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=717d1fa230cffa8c08764d71bd0b4a50d3a90cae backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:20:28 compute-0 sudo[30392]: pam_unix(sudo:session): session closed for user root
Nov 26 12:20:28 compute-0 sudo[30418]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqsubnlbttvlieutidspaynknixjphvo ; /usr/bin/python3'
Nov 26 12:20:28 compute-0 sudo[30418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:20:28 compute-0 python3[30420]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 12:20:28 compute-0 sudo[30418]: pam_unix(sudo:session): session closed for user root
Nov 26 12:20:29 compute-0 sudo[30491]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwsissbgmblrxxiaqohtjkuscjdfjhme ; /usr/bin/python3'
Nov 26 12:20:29 compute-0 sudo[30491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:20:29 compute-0 python3[30493]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764159627.8925967-34009-59046118125108/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=8163d09913b97597f86e38eb45c3003e91da783e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:20:29 compute-0 sudo[30491]: pam_unix(sudo:session): session closed for user root
Nov 26 12:20:29 compute-0 sudo[30517]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhnlosnzfwkdkkwltsqcqrtlmrbfijkt ; /usr/bin/python3'
Nov 26 12:20:29 compute-0 sudo[30517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:20:29 compute-0 python3[30519]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 12:20:29 compute-0 sudo[30517]: pam_unix(sudo:session): session closed for user root
Nov 26 12:20:29 compute-0 sudo[30590]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgfxpiuuzuannozaxznclcbdzoyiipuf ; /usr/bin/python3'
Nov 26 12:20:29 compute-0 sudo[30590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:20:29 compute-0 python3[30592]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764159627.8925967-34009-59046118125108/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=d108d0750ad5b288ccc41bc6534ea307cc51e987 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:20:29 compute-0 sudo[30590]: pam_unix(sudo:session): session closed for user root
Nov 26 12:20:29 compute-0 sudo[30616]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjwtpktlqqtupnoneivykcyyllyxqyxd ; /usr/bin/python3'
Nov 26 12:20:29 compute-0 sudo[30616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:20:29 compute-0 python3[30618]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 12:20:29 compute-0 sudo[30616]: pam_unix(sudo:session): session closed for user root
Nov 26 12:20:29 compute-0 sudo[30689]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsrgiydkayhbpshnpkcfyefcushwweiy ; /usr/bin/python3'
Nov 26 12:20:29 compute-0 sudo[30689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:20:30 compute-0 python3[30691]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764159627.8925967-34009-59046118125108/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=20c3917c672c059a872cf09a437f61890d2f89fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:20:30 compute-0 sudo[30689]: pam_unix(sudo:session): session closed for user root
Nov 26 12:20:30 compute-0 sudo[30715]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwqdkrqzpghdkceumklvszoowrexusyq ; /usr/bin/python3'
Nov 26 12:20:30 compute-0 sudo[30715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:20:30 compute-0 python3[30717]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 12:20:30 compute-0 sudo[30715]: pam_unix(sudo:session): session closed for user root
Nov 26 12:20:30 compute-0 sudo[30788]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlfalblataixllkgvvvtqgcagmiffukh ; /usr/bin/python3'
Nov 26 12:20:30 compute-0 sudo[30788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:20:30 compute-0 python3[30790]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764159627.8925967-34009-59046118125108/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=4d14f168e8a0e6930d905faffbcdf4fedd6664d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:20:30 compute-0 sudo[30788]: pam_unix(sudo:session): session closed for user root
Nov 26 12:20:30 compute-0 sudo[30814]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlkdjbrbhxgcukeekxyiilxjerwjhpbw ; /usr/bin/python3'
Nov 26 12:20:30 compute-0 sudo[30814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:20:30 compute-0 python3[30816]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 12:20:30 compute-0 sudo[30814]: pam_unix(sudo:session): session closed for user root
Nov 26 12:20:30 compute-0 sudo[30887]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuhrqnrnvvxzeofzuficxhgtgsgylonw ; /usr/bin/python3'
Nov 26 12:20:30 compute-0 sudo[30887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:20:30 compute-0 python3[30889]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764159627.8925967-34009-59046118125108/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6646317362318a9831d66a1804f6bb7dd1b97cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:20:30 compute-0 sudo[30887]: pam_unix(sudo:session): session closed for user root
Nov 26 12:20:32 compute-0 sshd-session[30914]: Connection closed by 192.168.122.11 port 47314 [preauth]
Nov 26 12:20:32 compute-0 sshd-session[30915]: Connection closed by 192.168.122.11 port 47322 [preauth]
Nov 26 12:20:32 compute-0 sshd-session[30916]: Unable to negotiate with 192.168.122.11 port 47332: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 26 12:20:32 compute-0 sshd-session[30917]: Unable to negotiate with 192.168.122.11 port 47334: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 26 12:20:32 compute-0 sshd-session[30918]: Unable to negotiate with 192.168.122.11 port 47348: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 26 12:20:42 compute-0 python3[30947]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:23:04 compute-0 systemd[1]: Starting dnf makecache...
Nov 26 12:23:04 compute-0 dnf[30949]: Failed determining last makecache time.
Nov 26 12:23:05 compute-0 dnf[30949]: delorean-openstack-barbican-42b4c41831408a8e323  83 kB/s |  13 kB     00:00
Nov 26 12:23:05 compute-0 dnf[30949]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 456 kB/s |  65 kB     00:00
Nov 26 12:23:05 compute-0 dnf[30949]: delorean-openstack-cinder-1c00d6490d88e436f26ef 227 kB/s |  32 kB     00:00
Nov 26 12:23:05 compute-0 dnf[30949]: delorean-python-stevedore-c4acc5639fd2329372142 957 kB/s | 131 kB     00:00
Nov 26 12:23:05 compute-0 dnf[30949]: delorean-python-observabilityclient-2f31846d73c 174 kB/s |  25 kB     00:00
Nov 26 12:23:05 compute-0 dnf[30949]: delorean-os-net-config-bbae2ed8a159b0435a473f38 2.5 MB/s | 356 kB     00:00
Nov 26 12:23:06 compute-0 dnf[30949]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 312 kB/s |  42 kB     00:00
Nov 26 12:23:06 compute-0 dnf[30949]: delorean-python-designate-tests-tempest-347fdbc  98 kB/s |  18 kB     00:00
Nov 26 12:23:06 compute-0 dnf[30949]: delorean-openstack-glance-1fd12c29b339f30fe823e 123 kB/s |  18 kB     00:00
Nov 26 12:23:06 compute-0 dnf[30949]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 179 kB/s |  29 kB     00:00
Nov 26 12:23:06 compute-0 dnf[30949]: delorean-openstack-manila-3c01b7181572c95dac462 176 kB/s |  25 kB     00:00
Nov 26 12:23:06 compute-0 dnf[30949]: delorean-python-whitebox-neutron-tests-tempest- 1.0 MB/s | 154 kB     00:00
Nov 26 12:23:07 compute-0 dnf[30949]: delorean-openstack-octavia-ba397f07a7331190208c 169 kB/s |  26 kB     00:00
Nov 26 12:23:07 compute-0 dnf[30949]: delorean-openstack-watcher-c014f81a8647287f6dcc 122 kB/s |  16 kB     00:00
Nov 26 12:23:07 compute-0 dnf[30949]: delorean-python-tcib-1124124ec06aadbac34f0d340b  53 kB/s | 7.4 kB     00:00
Nov 26 12:23:07 compute-0 dnf[30949]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 991 kB/s | 144 kB     00:00
Nov 26 12:23:07 compute-0 dnf[30949]: delorean-openstack-swift-dc98a8463506ac520c469a 103 kB/s |  14 kB     00:00
Nov 26 12:23:07 compute-0 dnf[30949]: delorean-python-tempestconf-8515371b7cceebd4282 397 kB/s |  53 kB     00:00
Nov 26 12:23:08 compute-0 dnf[30949]: delorean-openstack-heat-ui-013accbfd179753bc3f0 711 kB/s |  96 kB     00:00
Nov 26 12:23:09 compute-0 dnf[30949]: CentOS Stream 9 - BaseOS                        5.0 kB/s | 7.3 kB     00:01
Nov 26 12:23:10 compute-0 dnf[30949]: CentOS Stream 9 - AppStream                      15 kB/s | 7.4 kB     00:00
Nov 26 12:23:10 compute-0 dnf[30949]: CentOS Stream 9 - CRB                           8.7 kB/s | 7.2 kB     00:00
Nov 26 12:23:11 compute-0 dnf[30949]: CentOS Stream 9 - Extras packages                19 kB/s | 8.3 kB     00:00
Nov 26 12:23:11 compute-0 dnf[30949]: dlrn-antelope-testing                           7.3 MB/s | 1.1 MB     00:00
Nov 26 12:23:11 compute-0 dnf[30949]: dlrn-antelope-build-deps                        3.2 MB/s | 461 kB     00:00
Nov 26 12:23:12 compute-0 dnf[30949]: centos9-rabbitmq                                2.6 MB/s | 123 kB     00:00
Nov 26 12:23:12 compute-0 dnf[30949]: centos9-storage                                  32 MB/s | 415 kB     00:00
Nov 26 12:23:12 compute-0 dnf[30949]: centos9-opstools                                4.4 MB/s |  51 kB     00:00
Nov 26 12:23:12 compute-0 dnf[30949]: NFV SIG OpenvSwitch                              34 MB/s | 458 kB     00:00
Nov 26 12:23:12 compute-0 dnf[30949]: repo-setup-centos-appstream                     219 MB/s |  25 MB     00:00
Nov 26 12:23:17 compute-0 dnf[30949]: repo-setup-centos-baseos                        199 MB/s | 8.8 MB     00:00
Nov 26 12:23:18 compute-0 dnf[30949]: repo-setup-centos-highavailability               46 MB/s | 744 kB     00:00
Nov 26 12:23:18 compute-0 dnf[30949]: repo-setup-centos-powertools                    199 MB/s | 7.3 MB     00:00
Nov 26 12:23:18 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 26 12:23:18 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 26 12:23:18 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 26 12:23:18 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 26 12:23:22 compute-0 dnf[30949]: Extra Packages for Enterprise Linux 9 - x86_64  7.2 MB/s |  20 MB     00:02
Nov 26 12:23:32 compute-0 dnf[30949]: Metadata cache created.
Nov 26 12:23:32 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 26 12:23:32 compute-0 systemd[1]: Finished dnf makecache.
Nov 26 12:23:32 compute-0 systemd[1]: dnf-makecache.service: Consumed 18.563s CPU time.
Nov 26 12:25:42 compute-0 sshd-session[30037]: Received disconnect from 192.168.26.112 port 44164:11: disconnected by user
Nov 26 12:25:42 compute-0 sshd-session[30037]: Disconnected from user zuul 192.168.26.112 port 44164
Nov 26 12:25:42 compute-0 sshd-session[30034]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:25:42 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Nov 26 12:25:42 compute-0 systemd[1]: session-6.scope: Consumed 3.293s CPU time.
Nov 26 12:25:42 compute-0 systemd-logind[777]: Session 6 logged out. Waiting for processes to exit.
Nov 26 12:25:42 compute-0 systemd-logind[777]: Removed session 6.
Nov 26 12:30:04 compute-0 sshd-session[31055]: Accepted publickey for zuul from 192.168.122.30 port 54596 ssh2: ECDSA SHA256:oYqKaXpw3UXfGsjV9kVmxjhHDxrL0kntNo/c2mjveus
Nov 26 12:30:04 compute-0 systemd-logind[777]: New session 7 of user zuul.
Nov 26 12:30:04 compute-0 systemd[1]: Started Session 7 of User zuul.
Nov 26 12:30:04 compute-0 sshd-session[31055]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:30:04 compute-0 python3.9[31208]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:30:05 compute-0 sudo[31387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enargjhrhnfcdbhtckbapwfkccbgsufk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160205.4247608-32-280772585992163/AnsiballZ_command.py'
Nov 26 12:30:05 compute-0 sudo[31387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:30:05 compute-0 python3.9[31389]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:30:14 compute-0 sudo[31387]: pam_unix(sudo:session): session closed for user root
Nov 26 12:30:14 compute-0 sshd-session[31058]: Connection closed by 192.168.122.30 port 54596
Nov 26 12:30:14 compute-0 sshd-session[31055]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:30:14 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Nov 26 12:30:14 compute-0 systemd[1]: session-7.scope: Consumed 6.253s CPU time.
Nov 26 12:30:14 compute-0 systemd-logind[777]: Session 7 logged out. Waiting for processes to exit.
Nov 26 12:30:14 compute-0 systemd-logind[777]: Removed session 7.
Nov 26 12:30:30 compute-0 sshd-session[31448]: Accepted publickey for zuul from 192.168.122.30 port 48508 ssh2: ECDSA SHA256:oYqKaXpw3UXfGsjV9kVmxjhHDxrL0kntNo/c2mjveus
Nov 26 12:30:30 compute-0 systemd-logind[777]: New session 8 of user zuul.
Nov 26 12:30:30 compute-0 systemd[1]: Started Session 8 of User zuul.
Nov 26 12:30:30 compute-0 sshd-session[31448]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:30:30 compute-0 python3.9[31601]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 26 12:30:31 compute-0 python3.9[31775]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:30:32 compute-0 sudo[31925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlhhgqcasouehotancophuzliohlzqvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160231.7954996-45-136024714830016/AnsiballZ_command.py'
Nov 26 12:30:32 compute-0 sudo[31925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:30:32 compute-0 python3.9[31927]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:30:32 compute-0 sudo[31925]: pam_unix(sudo:session): session closed for user root
Nov 26 12:30:32 compute-0 sudo[32078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpjbdnqsnnpyvukqeibinkptwaxrjvff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160232.529555-57-87144795032559/AnsiballZ_stat.py'
Nov 26 12:30:32 compute-0 sudo[32078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:30:33 compute-0 python3.9[32080]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:30:33 compute-0 sudo[32078]: pam_unix(sudo:session): session closed for user root
Nov 26 12:30:33 compute-0 sudo[32230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqefixsxlfkjhrgrmbbkhcjirlxndfeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160233.1651783-65-70030325692427/AnsiballZ_file.py'
Nov 26 12:30:33 compute-0 sudo[32230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:30:33 compute-0 python3.9[32232]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:30:33 compute-0 sudo[32230]: pam_unix(sudo:session): session closed for user root
Nov 26 12:30:33 compute-0 sudo[32382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfbmkrzoedhqktdcalmuwxfhwayurjwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160233.7326355-73-138959552607537/AnsiballZ_stat.py'
Nov 26 12:30:33 compute-0 sudo[32382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:30:34 compute-0 python3.9[32384]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:30:34 compute-0 sudo[32382]: pam_unix(sudo:session): session closed for user root
Nov 26 12:30:34 compute-0 sudo[32505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkzscfblmcgnhjwcvbddsjjlvzyyhldd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160233.7326355-73-138959552607537/AnsiballZ_copy.py'
Nov 26 12:30:34 compute-0 sudo[32505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:30:34 compute-0 python3.9[32507]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764160233.7326355-73-138959552607537/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:30:34 compute-0 sudo[32505]: pam_unix(sudo:session): session closed for user root
Nov 26 12:30:34 compute-0 sudo[32657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egfqvsazanchfevfkxcvwjoufrkvucfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160234.7156258-88-108874417439989/AnsiballZ_setup.py'
Nov 26 12:30:34 compute-0 sudo[32657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:30:35 compute-0 python3.9[32659]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:30:35 compute-0 sudo[32657]: pam_unix(sudo:session): session closed for user root
Nov 26 12:30:35 compute-0 sudo[32813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtlftqojghqovpgldugjuvqdjllyocxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160235.4010715-96-249896138347943/AnsiballZ_file.py'
Nov 26 12:30:35 compute-0 sudo[32813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:30:35 compute-0 python3.9[32815]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:30:35 compute-0 sudo[32813]: pam_unix(sudo:session): session closed for user root
Nov 26 12:30:36 compute-0 sudo[32965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohbiwtfgujldyinhatqxaqvqbybnllyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160235.9075236-105-240093099169924/AnsiballZ_file.py'
Nov 26 12:30:36 compute-0 sudo[32965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:30:36 compute-0 python3.9[32967]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:30:36 compute-0 sudo[32965]: pam_unix(sudo:session): session closed for user root
Nov 26 12:30:36 compute-0 python3.9[33117]: ansible-ansible.builtin.service_facts Invoked
Nov 26 12:30:38 compute-0 python3.9[33370]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:30:39 compute-0 python3.9[33520]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:30:40 compute-0 python3.9[33674]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:30:40 compute-0 sudo[33830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxtfjmjngfjgrgpyllcuqivtwrvxevyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160240.5371273-153-67148772490648/AnsiballZ_setup.py'
Nov 26 12:30:40 compute-0 sudo[33830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:30:40 compute-0 python3.9[33832]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 12:30:41 compute-0 sudo[33830]: pam_unix(sudo:session): session closed for user root
Nov 26 12:30:41 compute-0 sudo[33914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtnukeyjrnqyihytxnoayhjenjttrkpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160240.5371273-153-67148772490648/AnsiballZ_dnf.py'
Nov 26 12:30:41 compute-0 sudo[33914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:30:41 compute-0 python3.9[33916]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 12:31:51 compute-0 systemd[1]: Reloading.
Nov 26 12:31:51 compute-0 systemd-rc-local-generator[34111]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:31:51 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 26 12:31:51 compute-0 systemd[1]: Reloading.
Nov 26 12:31:51 compute-0 systemd-rc-local-generator[34151]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:31:51 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 26 12:31:51 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 26 12:31:51 compute-0 systemd[1]: Reloading.
Nov 26 12:31:51 compute-0 systemd-rc-local-generator[34190]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:31:51 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 26 12:31:51 compute-0 dbus-broker-launch[766]: Noticed file-system modification, trigger reload.
Nov 26 12:31:51 compute-0 dbus-broker-launch[766]: Noticed file-system modification, trigger reload.
Nov 26 12:31:51 compute-0 dbus-broker-launch[766]: Noticed file-system modification, trigger reload.
Nov 26 12:32:35 compute-0 kernel: SELinux:  Converting 2719 SID table entries...
Nov 26 12:32:35 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 12:32:35 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 26 12:32:35 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 12:32:35 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 26 12:32:35 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 12:32:35 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 12:32:35 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 12:32:35 compute-0 dbus-broker-launch[767]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 26 12:32:35 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 12:32:35 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 26 12:32:35 compute-0 systemd[1]: Reloading.
Nov 26 12:32:36 compute-0 systemd-rc-local-generator[34489]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:32:36 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 12:32:36 compute-0 sudo[33914]: pam_unix(sudo:session): session closed for user root
Nov 26 12:32:36 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 12:32:36 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 26 12:32:36 compute-0 systemd[1]: run-raf41206f44d547c2a80928a5b4a86684.service: Deactivated successfully.
Nov 26 12:32:36 compute-0 sudo[35408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akqauvynlriizodhvlgprzboomtfuvya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160356.5141296-165-31832667493745/AnsiballZ_command.py'
Nov 26 12:32:36 compute-0 sudo[35408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:32:36 compute-0 python3.9[35410]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:32:37 compute-0 sudo[35408]: pam_unix(sudo:session): session closed for user root
Nov 26 12:32:38 compute-0 sudo[35689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyhgslwzfhimlqjxapvluldpatgftvxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160357.6331708-173-112105229357512/AnsiballZ_selinux.py'
Nov 26 12:32:38 compute-0 sudo[35689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:32:38 compute-0 python3.9[35691]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 26 12:32:38 compute-0 sudo[35689]: pam_unix(sudo:session): session closed for user root
Nov 26 12:32:38 compute-0 sudo[35841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlibzewgkfhhvievogglieliowgxrqde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160358.5253377-184-144293806729703/AnsiballZ_command.py'
Nov 26 12:32:38 compute-0 sudo[35841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:32:38 compute-0 python3.9[35843]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 26 12:32:39 compute-0 sudo[35841]: pam_unix(sudo:session): session closed for user root
Nov 26 12:32:39 compute-0 sudo[35994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpkkazvuogflgfshwdbcyofawwkdgcoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160359.5256915-192-101301625891325/AnsiballZ_file.py'
Nov 26 12:32:39 compute-0 sudo[35994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:32:40 compute-0 python3.9[35996]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:32:40 compute-0 sudo[35994]: pam_unix(sudo:session): session closed for user root
Nov 26 12:32:41 compute-0 sudo[36146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pidjcsqtfzjzohrwjpoanvxawzdmojga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160360.8047278-200-14771984403566/AnsiballZ_mount.py'
Nov 26 12:32:41 compute-0 sudo[36146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:32:41 compute-0 python3.9[36148]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 26 12:32:41 compute-0 sudo[36146]: pam_unix(sudo:session): session closed for user root
Nov 26 12:32:41 compute-0 sudo[36298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubrahpkfdhdbmbocheckwgvujfjiqvgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160361.8126118-228-89786288890034/AnsiballZ_file.py'
Nov 26 12:32:41 compute-0 sudo[36298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:32:42 compute-0 python3.9[36300]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:32:42 compute-0 sudo[36298]: pam_unix(sudo:session): session closed for user root
Nov 26 12:32:42 compute-0 sudo[36450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhetxtmmfqfpgayzdsfxybkqeqeyubzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160362.2464335-236-200188565198859/AnsiballZ_stat.py'
Nov 26 12:32:42 compute-0 sudo[36450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:32:42 compute-0 python3.9[36452]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:32:42 compute-0 sudo[36450]: pam_unix(sudo:session): session closed for user root
Nov 26 12:32:42 compute-0 sudo[36573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skqrzalhhccpwkbdlomysgninqwmbuzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160362.2464335-236-200188565198859/AnsiballZ_copy.py'
Nov 26 12:32:42 compute-0 sudo[36573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:32:42 compute-0 python3.9[36575]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764160362.2464335-236-200188565198859/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7c9073e58b305b24b8ebef88eac378fe26a8dfa0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:32:42 compute-0 sudo[36573]: pam_unix(sudo:session): session closed for user root
Nov 26 12:32:43 compute-0 sudo[36725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynojunwalsebasspnisvxdvvnlkvxlaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160363.2659512-260-42853238032081/AnsiballZ_stat.py'
Nov 26 12:32:43 compute-0 sudo[36725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:32:43 compute-0 python3.9[36727]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:32:43 compute-0 sudo[36725]: pam_unix(sudo:session): session closed for user root
Nov 26 12:32:43 compute-0 sudo[36877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbgebekijyjuabpkapgfrwmudwekmini ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160363.6964977-268-6878022076327/AnsiballZ_command.py'
Nov 26 12:32:43 compute-0 sudo[36877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:32:44 compute-0 python3.9[36879]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:32:44 compute-0 sudo[36877]: pam_unix(sudo:session): session closed for user root
Nov 26 12:32:44 compute-0 sudo[37030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsaeszeoogmnecmdvisifyesrfvaumsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160364.1856446-276-68109125412752/AnsiballZ_file.py'
Nov 26 12:32:44 compute-0 sudo[37030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:32:44 compute-0 python3.9[37032]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:32:44 compute-0 sudo[37030]: pam_unix(sudo:session): session closed for user root
Nov 26 12:32:45 compute-0 sudo[37182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jghmyghijddneezxtfdtmwwcqmgljact ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160364.7758062-287-235076940946051/AnsiballZ_getent.py'
Nov 26 12:32:45 compute-0 sudo[37182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:32:47 compute-0 python3.9[37184]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 26 12:32:47 compute-0 sudo[37182]: pam_unix(sudo:session): session closed for user root
Nov 26 12:32:47 compute-0 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 12:32:48 compute-0 sudo[37336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylozuoykhjyygrocdsizjzyqoaegxaho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160367.7232885-295-18241880788293/AnsiballZ_group.py'
Nov 26 12:32:48 compute-0 sudo[37336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:32:48 compute-0 python3.9[37338]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 26 12:32:48 compute-0 groupadd[37339]: group added to /etc/group: name=qemu, GID=107
Nov 26 12:32:48 compute-0 groupadd[37339]: group added to /etc/gshadow: name=qemu
Nov 26 12:32:48 compute-0 groupadd[37339]: new group: name=qemu, GID=107
Nov 26 12:32:48 compute-0 sudo[37336]: pam_unix(sudo:session): session closed for user root
Nov 26 12:32:48 compute-0 sudo[37494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyboqojvpmtmznadagiblqzwhkyqdlcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160368.3095906-303-27184354308647/AnsiballZ_user.py'
Nov 26 12:32:48 compute-0 sudo[37494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:32:48 compute-0 python3.9[37496]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 26 12:32:48 compute-0 useradd[37498]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Nov 26 12:32:48 compute-0 sudo[37494]: pam_unix(sudo:session): session closed for user root
Nov 26 12:32:49 compute-0 sudo[37654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzxovdhzrwvobblggrthksllttljerjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160368.9416847-311-107983751317647/AnsiballZ_getent.py'
Nov 26 12:32:49 compute-0 sudo[37654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:32:49 compute-0 python3.9[37656]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 26 12:32:49 compute-0 sudo[37654]: pam_unix(sudo:session): session closed for user root
Nov 26 12:32:49 compute-0 sudo[37807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duspptsrxkrlapdxqwvhbxzhugvizkqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160369.3546917-319-252683313394787/AnsiballZ_group.py'
Nov 26 12:32:49 compute-0 sudo[37807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:32:49 compute-0 python3.9[37809]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 26 12:32:49 compute-0 groupadd[37810]: group added to /etc/group: name=hugetlbfs, GID=42477
Nov 26 12:32:49 compute-0 groupadd[37810]: group added to /etc/gshadow: name=hugetlbfs
Nov 26 12:32:49 compute-0 groupadd[37810]: new group: name=hugetlbfs, GID=42477
Nov 26 12:32:49 compute-0 sudo[37807]: pam_unix(sudo:session): session closed for user root
Nov 26 12:32:49 compute-0 sudo[37965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryuenwlpgmngeiwumhaszvvmymftgxxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160369.8087275-328-181487300650357/AnsiballZ_file.py'
Nov 26 12:32:49 compute-0 sudo[37965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:32:50 compute-0 python3.9[37967]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 26 12:32:50 compute-0 sudo[37965]: pam_unix(sudo:session): session closed for user root
Nov 26 12:32:50 compute-0 sudo[38117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twaetpzkkkpuxxaifabdddymvrncvzsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160370.3183708-339-263478781723305/AnsiballZ_dnf.py'
Nov 26 12:32:50 compute-0 sudo[38117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:32:50 compute-0 python3.9[38119]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 12:32:51 compute-0 sudo[38117]: pam_unix(sudo:session): session closed for user root
Nov 26 12:32:52 compute-0 sudo[38270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgpochfrqsmqybbrdzbpzvfnyypkhgks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160371.9744618-347-194874645325995/AnsiballZ_file.py'
Nov 26 12:32:52 compute-0 sudo[38270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:32:52 compute-0 python3.9[38272]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:32:52 compute-0 sudo[38270]: pam_unix(sudo:session): session closed for user root
Nov 26 12:32:52 compute-0 sudo[38422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxccyfkmlwboxjeictljchkelvxpqxqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160372.4117715-355-227205241924536/AnsiballZ_stat.py'
Nov 26 12:32:52 compute-0 sudo[38422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:32:52 compute-0 python3.9[38424]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:32:52 compute-0 sudo[38422]: pam_unix(sudo:session): session closed for user root
Nov 26 12:32:52 compute-0 sudo[38545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqqjhjqroebhaaezhqiwfdguayelssde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160372.4117715-355-227205241924536/AnsiballZ_copy.py'
Nov 26 12:32:52 compute-0 sudo[38545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:32:53 compute-0 python3.9[38547]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764160372.4117715-355-227205241924536/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:32:53 compute-0 sudo[38545]: pam_unix(sudo:session): session closed for user root
Nov 26 12:32:53 compute-0 sudo[38697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jczazvetqrltzfhasfbkebehcloipjgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160373.2425463-370-40254794670061/AnsiballZ_systemd.py'
Nov 26 12:32:53 compute-0 sudo[38697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:32:53 compute-0 python3.9[38699]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 12:32:53 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 26 12:32:53 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 26 12:32:53 compute-0 systemd-modules-load[38703]: Inserted module 'br_netfilter'
Nov 26 12:32:53 compute-0 kernel: Bridge firewalling registered
Nov 26 12:32:53 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 26 12:32:53 compute-0 sudo[38697]: pam_unix(sudo:session): session closed for user root
Nov 26 12:32:54 compute-0 sudo[38857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unubiouxymepcnhirnkgjhkwshteipsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160374.0941312-378-139837407102522/AnsiballZ_stat.py'
Nov 26 12:32:54 compute-0 sudo[38857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:32:54 compute-0 python3.9[38859]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:32:54 compute-0 sudo[38857]: pam_unix(sudo:session): session closed for user root
Nov 26 12:32:54 compute-0 sudo[38980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdcqnhbeqfnpuezsgywzqibixtqftyhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160374.0941312-378-139837407102522/AnsiballZ_copy.py'
Nov 26 12:32:54 compute-0 sudo[38980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:32:54 compute-0 python3.9[38982]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764160374.0941312-378-139837407102522/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:32:54 compute-0 sudo[38980]: pam_unix(sudo:session): session closed for user root
Nov 26 12:32:55 compute-0 sudo[39132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhvfdzovnffcuuxknkdlhtusgbvmhrco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160375.0014222-396-73867291775824/AnsiballZ_dnf.py'
Nov 26 12:32:55 compute-0 sudo[39132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:32:55 compute-0 python3.9[39134]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 12:33:00 compute-0 dbus-broker-launch[766]: Noticed file-system modification, trigger reload.
Nov 26 12:33:00 compute-0 dbus-broker-launch[766]: Noticed file-system modification, trigger reload.
Nov 26 12:33:01 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 12:33:01 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 26 12:33:01 compute-0 systemd[1]: Reloading.
Nov 26 12:33:01 compute-0 systemd-rc-local-generator[39190]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:33:01 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 12:33:01 compute-0 sudo[39132]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:02 compute-0 python3.9[40487]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:33:02 compute-0 python3.9[41576]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 26 12:33:03 compute-0 python3.9[42424]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:33:03 compute-0 sudo[43110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jygadkbahbzybubbszszpqwpzplptbdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160383.2560344-435-103079527298292/AnsiballZ_command.py'
Nov 26 12:33:03 compute-0 sudo[43110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:03 compute-0 python3.9[43129]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:33:03 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 12:33:03 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 26 12:33:03 compute-0 systemd[1]: man-db-cache-update.service: Consumed 3.338s CPU time.
Nov 26 12:33:03 compute-0 systemd[1]: run-r2f62388b33d54473987a4b284b643bc5.service: Deactivated successfully.
Nov 26 12:33:03 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 26 12:33:04 compute-0 systemd[1]: Starting Authorization Manager...
Nov 26 12:33:04 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 26 12:33:04 compute-0 polkitd[43512]: Started polkitd version 0.117
Nov 26 12:33:04 compute-0 polkitd[43512]: Loading rules from directory /etc/polkit-1/rules.d
Nov 26 12:33:04 compute-0 polkitd[43512]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 26 12:33:04 compute-0 polkitd[43512]: Finished loading, compiling and executing 2 rules
Nov 26 12:33:04 compute-0 systemd[1]: Started Authorization Manager.
Nov 26 12:33:04 compute-0 polkitd[43512]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 26 12:33:04 compute-0 sudo[43110]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:04 compute-0 sudo[43676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enyastztycruzdggabixsdwklffzaukm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160384.2566466-444-192179717734962/AnsiballZ_systemd.py'
Nov 26 12:33:04 compute-0 sudo[43676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:04 compute-0 python3.9[43678]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:33:04 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 26 12:33:04 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 26 12:33:04 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 26 12:33:04 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 26 12:33:04 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 26 12:33:04 compute-0 sudo[43676]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:05 compute-0 python3.9[43840]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 26 12:33:06 compute-0 sudo[43990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlyrxctgyxjishwcbrogoltztghnfkxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160386.7324092-501-270263032638162/AnsiballZ_systemd.py'
Nov 26 12:33:06 compute-0 sudo[43990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:07 compute-0 python3.9[43992]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:33:07 compute-0 systemd[1]: Reloading.
Nov 26 12:33:07 compute-0 systemd-rc-local-generator[44015]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:33:07 compute-0 sudo[43990]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:07 compute-0 sudo[44178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bafotlexznubsdnuxgwpftcqxqgobron ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160387.4311492-501-8915711726808/AnsiballZ_systemd.py'
Nov 26 12:33:07 compute-0 sudo[44178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:07 compute-0 python3.9[44180]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:33:07 compute-0 systemd[1]: Reloading.
Nov 26 12:33:07 compute-0 systemd-rc-local-generator[44202]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:33:08 compute-0 sudo[44178]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:08 compute-0 sudo[44367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwkxzjqogwzuohxvedunjjfnaltxospr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160388.1845357-517-279978390370659/AnsiballZ_command.py'
Nov 26 12:33:08 compute-0 sudo[44367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:08 compute-0 python3.9[44369]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:33:08 compute-0 sudo[44367]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:08 compute-0 sudo[44520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bingrmtrwdqxjchbothvcvphcgtjfsxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160388.612021-525-275735113142780/AnsiballZ_command.py'
Nov 26 12:33:08 compute-0 sudo[44520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:08 compute-0 python3.9[44522]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:33:08 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 26 12:33:08 compute-0 sudo[44520]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:09 compute-0 sudo[44673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unzichbustealtckrzwsvrfkmwzhlgab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160389.0355353-533-151062088054454/AnsiballZ_command.py'
Nov 26 12:33:09 compute-0 sudo[44673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:09 compute-0 python3.9[44675]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:33:10 compute-0 sudo[44673]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:10 compute-0 sudo[44835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyjofilwxraqwwzabhyjfxburykmgyyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160390.5124688-541-183880069969358/AnsiballZ_command.py'
Nov 26 12:33:10 compute-0 sudo[44835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:10 compute-0 python3.9[44837]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:33:10 compute-0 sudo[44835]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:11 compute-0 sudo[44988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvgykppvryiphqibzxscuqeauvvfwtka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160390.937705-549-138284643307622/AnsiballZ_systemd.py'
Nov 26 12:33:11 compute-0 sudo[44988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:11 compute-0 python3.9[44990]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 12:33:11 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 26 12:33:11 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Nov 26 12:33:11 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Nov 26 12:33:11 compute-0 systemd[1]: Starting Apply Kernel Variables...
Nov 26 12:33:11 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 26 12:33:11 compute-0 systemd[1]: Finished Apply Kernel Variables.
Nov 26 12:33:11 compute-0 sudo[44988]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:11 compute-0 sshd-session[31451]: Connection closed by 192.168.122.30 port 48508
Nov 26 12:33:11 compute-0 sshd-session[31448]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:33:11 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Nov 26 12:33:11 compute-0 systemd[1]: session-8.scope: Consumed 1min 38.038s CPU time.
Nov 26 12:33:11 compute-0 systemd-logind[777]: Session 8 logged out. Waiting for processes to exit.
Nov 26 12:33:11 compute-0 systemd-logind[777]: Removed session 8.
Nov 26 12:33:16 compute-0 sshd-session[45021]: Accepted publickey for zuul from 192.168.122.30 port 59206 ssh2: ECDSA SHA256:oYqKaXpw3UXfGsjV9kVmxjhHDxrL0kntNo/c2mjveus
Nov 26 12:33:16 compute-0 systemd-logind[777]: New session 9 of user zuul.
Nov 26 12:33:16 compute-0 systemd[1]: Started Session 9 of User zuul.
Nov 26 12:33:16 compute-0 sshd-session[45021]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:33:17 compute-0 python3.9[45174]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:33:17 compute-0 sudo[45328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omprtizkjusyimxxomzwbijidhdzlzhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160397.6234367-36-153618273947373/AnsiballZ_getent.py'
Nov 26 12:33:17 compute-0 sudo[45328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:18 compute-0 python3.9[45330]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 26 12:33:18 compute-0 sudo[45328]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:18 compute-0 sudo[45481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcwhkwcnqgsqgrgzylwrwyisdkowfccm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160398.1684554-44-148684744035705/AnsiballZ_group.py'
Nov 26 12:33:18 compute-0 sudo[45481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:18 compute-0 python3.9[45483]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 26 12:33:18 compute-0 groupadd[45484]: group added to /etc/group: name=openvswitch, GID=42476
Nov 26 12:33:18 compute-0 groupadd[45484]: group added to /etc/gshadow: name=openvswitch
Nov 26 12:33:18 compute-0 groupadd[45484]: new group: name=openvswitch, GID=42476
Nov 26 12:33:18 compute-0 sudo[45481]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:19 compute-0 sudo[45639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgitlaeujfpnknbsgjhbzefunibghybn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160398.7342813-52-22096582562208/AnsiballZ_user.py'
Nov 26 12:33:19 compute-0 sudo[45639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:19 compute-0 python3.9[45641]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 26 12:33:19 compute-0 useradd[45643]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Nov 26 12:33:19 compute-0 useradd[45643]: add 'openvswitch' to group 'hugetlbfs'
Nov 26 12:33:19 compute-0 useradd[45643]: add 'openvswitch' to shadow group 'hugetlbfs'
Nov 26 12:33:19 compute-0 sudo[45639]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:19 compute-0 sudo[45799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvvoqhgiswaubisejroxgpurdlqydprx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160399.4134786-62-89315361436648/AnsiballZ_setup.py'
Nov 26 12:33:19 compute-0 sudo[45799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:19 compute-0 python3.9[45801]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 12:33:20 compute-0 sudo[45799]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:20 compute-0 sudo[45883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtmezyxelqhwhzyvpwaheojhgchsczkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160399.4134786-62-89315361436648/AnsiballZ_dnf.py'
Nov 26 12:33:20 compute-0 sudo[45883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:20 compute-0 python3.9[45885]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 26 12:33:23 compute-0 irqbalance[772]: Cannot change IRQ 44 affinity: Operation not permitted
Nov 26 12:33:23 compute-0 irqbalance[772]: IRQ 44 affinity is now unmanaged
Nov 26 12:33:26 compute-0 sudo[45883]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:26 compute-0 sudo[46050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbbivxvhymockuuheyesxtxyyvmoseib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160406.5662704-76-64283580511915/AnsiballZ_dnf.py'
Nov 26 12:33:26 compute-0 sudo[46050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:26 compute-0 python3.9[46052]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 12:33:35 compute-0 kernel: SELinux:  Converting 2731 SID table entries...
Nov 26 12:33:35 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 12:33:35 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 26 12:33:35 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 12:33:35 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 26 12:33:35 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 12:33:35 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 12:33:35 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 12:33:35 compute-0 groupadd[46075]: group added to /etc/group: name=unbound, GID=993
Nov 26 12:33:35 compute-0 groupadd[46075]: group added to /etc/gshadow: name=unbound
Nov 26 12:33:35 compute-0 groupadd[46075]: new group: name=unbound, GID=993
Nov 26 12:33:35 compute-0 useradd[46082]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Nov 26 12:33:35 compute-0 dbus-broker-launch[767]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 26 12:33:35 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 26 12:33:36 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 12:33:36 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 26 12:33:36 compute-0 systemd[1]: Reloading.
Nov 26 12:33:36 compute-0 systemd-sysv-generator[46576]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:33:36 compute-0 systemd-rc-local-generator[46573]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:33:36 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 12:33:36 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 12:33:36 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 26 12:33:36 compute-0 systemd[1]: run-r7cb0bb0322dc4327bb4df237b950e2b6.service: Deactivated successfully.
Nov 26 12:33:36 compute-0 sudo[46050]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:37 compute-0 sudo[47147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxzpfqlvoysziusehmminsllgtlvzevl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160417.0219002-84-128204533577409/AnsiballZ_systemd.py'
Nov 26 12:33:37 compute-0 sudo[47147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:37 compute-0 python3.9[47149]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 12:33:37 compute-0 systemd[1]: Reloading.
Nov 26 12:33:37 compute-0 systemd-rc-local-generator[47175]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:33:37 compute-0 systemd-sysv-generator[47178]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:33:37 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Nov 26 12:33:37 compute-0 chown[47191]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 26 12:33:37 compute-0 ovs-ctl[47196]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 26 12:33:37 compute-0 ovs-ctl[47196]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 26 12:33:38 compute-0 ovs-ctl[47196]: Starting ovsdb-server [  OK  ]
Nov 26 12:33:38 compute-0 ovs-vsctl[47245]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 26 12:33:38 compute-0 ovs-vsctl[47265]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"1a132c77-5dda-4b90-923d-26a448f3fef6\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 26 12:33:38 compute-0 ovs-ctl[47196]: Configuring Open vSwitch system IDs [  OK  ]
Nov 26 12:33:38 compute-0 ovs-ctl[47196]: Enabling remote OVSDB managers [  OK  ]
Nov 26 12:33:38 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Nov 26 12:33:38 compute-0 ovs-vsctl[47271]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 26 12:33:38 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 26 12:33:38 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 26 12:33:38 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 26 12:33:38 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Nov 26 12:33:38 compute-0 ovs-ctl[47316]: Inserting openvswitch module [  OK  ]
Nov 26 12:33:38 compute-0 ovs-ctl[47285]: Starting ovs-vswitchd [  OK  ]
Nov 26 12:33:38 compute-0 ovs-ctl[47285]: Enabling remote OVSDB managers [  OK  ]
Nov 26 12:33:38 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 26 12:33:38 compute-0 ovs-vsctl[47334]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 26 12:33:38 compute-0 systemd[1]: Starting Open vSwitch...
Nov 26 12:33:38 compute-0 systemd[1]: Finished Open vSwitch.
Nov 26 12:33:38 compute-0 sudo[47147]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:38 compute-0 python3.9[47485]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:33:39 compute-0 sudo[47635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wynbmqufcsnmzhlzxlfncjknhwbtdlkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160419.127205-102-73636281133570/AnsiballZ_sefcontext.py'
Nov 26 12:33:39 compute-0 sudo[47635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:39 compute-0 python3.9[47637]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 26 12:33:40 compute-0 kernel: SELinux:  Converting 2745 SID table entries...
Nov 26 12:33:40 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 12:33:40 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 26 12:33:40 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 12:33:40 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 26 12:33:40 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 12:33:40 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 12:33:40 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 12:33:40 compute-0 sudo[47635]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:41 compute-0 python3.9[47792]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:33:41 compute-0 sudo[47948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xztklrrxybumqmmkzbeeyvqmwjfejxdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160421.4406025-120-161782325666467/AnsiballZ_dnf.py'
Nov 26 12:33:41 compute-0 dbus-broker-launch[767]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 26 12:33:41 compute-0 sudo[47948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:41 compute-0 python3.9[47950]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 12:33:42 compute-0 sudo[47948]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:43 compute-0 sudo[48101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beavyxspweiwgldvzzypjnhfwmbhrpgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160422.8944108-128-9109082254003/AnsiballZ_command.py'
Nov 26 12:33:43 compute-0 sudo[48101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:43 compute-0 python3.9[48103]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:33:43 compute-0 sudo[48101]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:44 compute-0 sudo[48388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blvgybemplyhoctyvnepbljesdhamipo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160423.9694412-136-226867523937693/AnsiballZ_file.py'
Nov 26 12:33:44 compute-0 sudo[48388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:44 compute-0 python3.9[48390]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 26 12:33:44 compute-0 sudo[48388]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:45 compute-0 python3.9[48540]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:33:45 compute-0 sudo[48692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkdebnnntznejrktoldjlicxoxmvzruf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160425.1875796-152-261118405027313/AnsiballZ_dnf.py'
Nov 26 12:33:45 compute-0 sudo[48692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:45 compute-0 python3.9[48694]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 12:33:48 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 12:33:48 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 26 12:33:48 compute-0 systemd[1]: Reloading.
Nov 26 12:33:48 compute-0 systemd-rc-local-generator[48726]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:33:48 compute-0 systemd-sysv-generator[48729]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:33:48 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 12:33:48 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 12:33:48 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 26 12:33:48 compute-0 systemd[1]: run-r96993d5388e2445e9478218b347fba95.service: Deactivated successfully.
Nov 26 12:33:48 compute-0 sudo[48692]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:49 compute-0 sudo[49009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flgjcilinpglecqnzilbukntxaxwcsnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160428.9203782-160-189698875049017/AnsiballZ_systemd.py'
Nov 26 12:33:49 compute-0 sudo[49009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:49 compute-0 python3.9[49011]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 12:33:49 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 26 12:33:49 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Nov 26 12:33:49 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Nov 26 12:33:49 compute-0 systemd[1]: Stopping Network Manager...
Nov 26 12:33:49 compute-0 NetworkManager[7252]: <info>  [1764160429.3492] caught SIGTERM, shutting down normally.
Nov 26 12:33:49 compute-0 NetworkManager[7252]: <info>  [1764160429.3500] dhcp4 (eth0): canceled DHCP transaction
Nov 26 12:33:49 compute-0 NetworkManager[7252]: <info>  [1764160429.3500] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 12:33:49 compute-0 NetworkManager[7252]: <info>  [1764160429.3500] dhcp4 (eth0): state changed no lease
Nov 26 12:33:49 compute-0 NetworkManager[7252]: <info>  [1764160429.3501] dhcp6 (eth0): canceled DHCP transaction
Nov 26 12:33:49 compute-0 NetworkManager[7252]: <info>  [1764160429.3501] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 12:33:49 compute-0 NetworkManager[7252]: <info>  [1764160429.3501] dhcp6 (eth0): state changed no lease
Nov 26 12:33:49 compute-0 NetworkManager[7252]: <info>  [1764160429.3502] manager: NetworkManager state is now CONNECTED_SITE
Nov 26 12:33:49 compute-0 NetworkManager[7252]: <info>  [1764160429.3533] exiting (success)
Nov 26 12:33:49 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 26 12:33:49 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 26 12:33:49 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 26 12:33:49 compute-0 systemd[1]: Stopped Network Manager.
Nov 26 12:33:49 compute-0 systemd[1]: Starting Network Manager...
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.3990] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:031c7117-1661-4641-8ff4-d1885bc6a83e)
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.3991] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4029] manager[0x561e4534d010]: monitoring kernel firmware directory '/lib/firmware'.
Nov 26 12:33:49 compute-0 systemd[1]: Starting Hostname Service...
Nov 26 12:33:49 compute-0 systemd[1]: Started Hostname Service.
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4582] hostname: hostname: using hostnamed
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4583] hostname: static hostname changed from (none) to "compute-0"
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4585] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4588] manager[0x561e4534d010]: rfkill: Wi-Fi hardware radio set enabled
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4588] manager[0x561e4534d010]: rfkill: WWAN hardware radio set enabled
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4601] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4608] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4608] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4608] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4609] manager: Networking is enabled by state file
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4610] settings: Loaded settings plugin: keyfile (internal)
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4613] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4629] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4634] dhcp: init: Using DHCP client 'internal'
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4636] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4639] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4642] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4647] device (lo): Activation: starting connection 'lo' (14d47366-79b4-47b4-8c24-e57561e2dedc)
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4651] device (eth0): carrier: link connected
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4654] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4657] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4658] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4661] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4665] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4669] device (eth1): carrier: link connected
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4671] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4674] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (7797382d-d835-51bb-84eb-feed5516994b) (indicated)
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4674] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4678] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4682] device (eth1): Activation: starting connection 'ci-private-network' (7797382d-d835-51bb-84eb-feed5516994b)
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4685] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 26 12:33:49 compute-0 systemd[1]: Started Network Manager.
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4689] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4690] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4691] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4692] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4694] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4695] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4697] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4699] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4702] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4704] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4705] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4710] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4713] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4718] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4723] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4724] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4727] device (lo): Activation: successful, device activated.
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4731] dhcp4 (eth0): state changed new lease, address=192.168.26.109
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4735] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4756] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 26 12:33:49 compute-0 systemd[1]: Starting Network Manager Wait Online...
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4763] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4766] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 26 12:33:49 compute-0 NetworkManager[49024]: <info>  [1764160429.4770] device (eth1): Activation: successful, device activated.
Nov 26 12:33:49 compute-0 sudo[49009]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:49 compute-0 sudo[49218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpliqlzvvdjbjwhlvocgehwznqpydklc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160429.613157-168-84013648691361/AnsiballZ_dnf.py'
Nov 26 12:33:49 compute-0 sudo[49218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:49 compute-0 python3.9[49220]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 12:33:50 compute-0 NetworkManager[49024]: <info>  [1764160430.5628] dhcp6 (eth0): state changed new lease, address=2001:db8::f0
Nov 26 12:33:50 compute-0 NetworkManager[49024]: <info>  [1764160430.5636] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 26 12:33:50 compute-0 NetworkManager[49024]: <info>  [1764160430.5670] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 26 12:33:50 compute-0 NetworkManager[49024]: <info>  [1764160430.5671] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 26 12:33:50 compute-0 NetworkManager[49024]: <info>  [1764160430.5673] manager: NetworkManager state is now CONNECTED_SITE
Nov 26 12:33:50 compute-0 NetworkManager[49024]: <info>  [1764160430.5675] device (eth0): Activation: successful, device activated.
Nov 26 12:33:50 compute-0 NetworkManager[49024]: <info>  [1764160430.5678] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 26 12:33:50 compute-0 NetworkManager[49024]: <info>  [1764160430.5680] manager: startup complete
Nov 26 12:33:50 compute-0 systemd[1]: Finished Network Manager Wait Online.
Nov 26 12:33:55 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 12:33:55 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 26 12:33:55 compute-0 systemd[1]: Reloading.
Nov 26 12:33:55 compute-0 systemd-sysv-generator[49293]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:33:55 compute-0 systemd-rc-local-generator[49287]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:33:55 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 12:33:56 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 12:33:56 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 26 12:33:56 compute-0 systemd[1]: run-r36894f7c132443c6ad0a134a7ff4402e.service: Deactivated successfully.
Nov 26 12:33:56 compute-0 sudo[49218]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:56 compute-0 sudo[49698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtuandpksgkqkecsbiquclteauvyarqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160436.7419496-180-77821990811704/AnsiballZ_stat.py'
Nov 26 12:33:56 compute-0 sudo[49698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:57 compute-0 python3.9[49700]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:33:57 compute-0 sudo[49698]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:57 compute-0 sudo[49850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztjfavtpcifhspvjtceqcshiszqhibmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160437.2795236-189-209603434116263/AnsiballZ_ini_file.py'
Nov 26 12:33:57 compute-0 sudo[49850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:57 compute-0 python3.9[49852]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:33:57 compute-0 sudo[49850]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:58 compute-0 sudo[50004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziodqerurscglrnlnvwbbreyzeyhmbps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160437.996989-199-276287838515414/AnsiballZ_ini_file.py'
Nov 26 12:33:58 compute-0 sudo[50004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:58 compute-0 python3.9[50006]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:33:58 compute-0 sudo[50004]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:58 compute-0 sudo[50156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teasygghhbjitrymyovjujpxfbuqoukf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160438.4171648-199-255234324697370/AnsiballZ_ini_file.py'
Nov 26 12:33:58 compute-0 sudo[50156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:58 compute-0 python3.9[50158]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:33:58 compute-0 sudo[50156]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:59 compute-0 sudo[50310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxdomdlnabupffqepnpzbrtwkctgckuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160438.8753505-214-217636475102087/AnsiballZ_ini_file.py'
Nov 26 12:33:59 compute-0 sudo[50310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:59 compute-0 python3.9[50312]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:33:59 compute-0 sudo[50310]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:59 compute-0 sudo[50462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irheiyaztwlkwiynjnrsrdvdtgcptycq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160439.328631-214-207723117606978/AnsiballZ_ini_file.py'
Nov 26 12:33:59 compute-0 sudo[50462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:33:59 compute-0 python3.9[50464]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:33:59 compute-0 sudo[50462]: pam_unix(sudo:session): session closed for user root
Nov 26 12:33:59 compute-0 sudo[50614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgtikntkgxhyetdjvtdxcsgpazlbmnpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160439.7785628-229-261859560211499/AnsiballZ_stat.py'
Nov 26 12:33:59 compute-0 sudo[50614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:00 compute-0 python3.9[50616]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:34:00 compute-0 sudo[50614]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:00 compute-0 sudo[50737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oukicmhwxakxggdyyyyugnflmmmuigcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160439.7785628-229-261859560211499/AnsiballZ_copy.py'
Nov 26 12:34:00 compute-0 sudo[50737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:00 compute-0 python3.9[50739]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764160439.7785628-229-261859560211499/.source _original_basename=.j_7zdh79 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:34:00 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 26 12:34:00 compute-0 sudo[50737]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:00 compute-0 sudo[50889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eveshjikvtljonpgmgwikathkihoegkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160440.7431102-244-73890133601810/AnsiballZ_file.py'
Nov 26 12:34:00 compute-0 sudo[50889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:01 compute-0 python3.9[50891]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:34:01 compute-0 sudo[50889]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:01 compute-0 sudo[51041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuiqvfzluttjxybhaxkxdijodydaqkcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160441.1967251-252-46307920088123/AnsiballZ_edpm_os_net_config_mappings.py'
Nov 26 12:34:01 compute-0 sudo[51041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:01 compute-0 python3.9[51043]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 26 12:34:01 compute-0 sudo[51041]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:02 compute-0 sudo[51193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdqotwvcucrseugviovlsdvmlkjujikk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160441.828635-261-103595761461572/AnsiballZ_file.py'
Nov 26 12:34:02 compute-0 sudo[51193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:02 compute-0 python3.9[51195]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:34:02 compute-0 sudo[51193]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:02 compute-0 sudo[51345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqvqjugqeiyjblmtrdnnddwnkhnqvgpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160442.4707034-271-25594701008991/AnsiballZ_stat.py'
Nov 26 12:34:02 compute-0 sudo[51345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:02 compute-0 sudo[51345]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:03 compute-0 sudo[51468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnmtduuxlsefiknxxnirhnnofvugcebm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160442.4707034-271-25594701008991/AnsiballZ_copy.py'
Nov 26 12:34:03 compute-0 sudo[51468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:03 compute-0 sudo[51468]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:03 compute-0 sudo[51620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkebkofrkxtohclgeyufertgjwmzdgeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160443.2986069-286-82934973053495/AnsiballZ_slurp.py'
Nov 26 12:34:03 compute-0 sudo[51620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:03 compute-0 python3.9[51622]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 26 12:34:03 compute-0 sudo[51620]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:04 compute-0 sudo[51795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-habynxxhqbscdrqagvgdpxhaljymcali ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160443.8974338-295-265983123450112/async_wrapper.py j737676950907 300 /home/zuul/.ansible/tmp/ansible-tmp-1764160443.8974338-295-265983123450112/AnsiballZ_edpm_os_net_config.py _'
Nov 26 12:34:04 compute-0 sudo[51795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:04 compute-0 ansible-async_wrapper.py[51797]: Invoked with j737676950907 300 /home/zuul/.ansible/tmp/ansible-tmp-1764160443.8974338-295-265983123450112/AnsiballZ_edpm_os_net_config.py _
Nov 26 12:34:04 compute-0 ansible-async_wrapper.py[51800]: Starting module and watcher
Nov 26 12:34:04 compute-0 ansible-async_wrapper.py[51800]: Start watching 51801 (300)
Nov 26 12:34:04 compute-0 ansible-async_wrapper.py[51801]: Start module (51801)
Nov 26 12:34:04 compute-0 ansible-async_wrapper.py[51797]: Return async_wrapper task started.
Nov 26 12:34:04 compute-0 sudo[51795]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:04 compute-0 python3.9[51802]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 26 12:34:05 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 26 12:34:05 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 26 12:34:05 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 26 12:34:05 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 26 12:34:05 compute-0 kernel: cfg80211: failed to load regulatory.db
Nov 26 12:34:05 compute-0 NetworkManager[49024]: <info>  [1764160445.9772] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51803 uid=0 result="success"
Nov 26 12:34:05 compute-0 NetworkManager[49024]: <info>  [1764160445.9787] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51803 uid=0 result="success"
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0146] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0147] audit: op="connection-add" uuid="1243cce9-421d-4253-b9ed-b59bd081783d" name="br-ex-br" pid=51803 uid=0 result="success"
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0157] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0158] audit: op="connection-add" uuid="a0e24a11-f180-4069-8ae5-827540d8884f" name="br-ex-port" pid=51803 uid=0 result="success"
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0167] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0168] audit: op="connection-add" uuid="c2517814-d9f4-44f9-9043-95bf252a8f9d" name="eth1-port" pid=51803 uid=0 result="success"
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0176] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0177] audit: op="connection-add" uuid="0cfa67fb-288b-46a2-82e9-107bddfde4c7" name="vlan20-port" pid=51803 uid=0 result="success"
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0185] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0185] audit: op="connection-add" uuid="6d37eb1c-302e-4556-b2dc-6307938d5eaf" name="vlan21-port" pid=51803 uid=0 result="success"
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0193] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0194] audit: op="connection-add" uuid="6d78c477-4735-44da-8d44-e935b3b614ab" name="vlan22-port" pid=51803 uid=0 result="success"
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0202] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0203] audit: op="connection-add" uuid="e667de12-3d14-45b8-9a07-40279d5a0a48" name="vlan23-port" pid=51803 uid=0 result="success"
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0217] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.timestamp,connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.dhcp-timeout,ipv6.may-fail,ipv6.routes,ipv6.method,ipv6.addr-gen-mode,802-3-ethernet.mtu" pid=51803 uid=0 result="success"
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0230] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0231] audit: op="connection-add" uuid="cf0fd381-56be-47d9-948c-451180b92cf3" name="br-ex-if" pid=51803 uid=0 result="success"
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0247] audit: op="connection-update" uuid="7797382d-d835-51bb-84eb-feed5516994b" name="ci-private-network" args="ovs-interface.type,connection.slave-type,connection.timestamp,connection.master,connection.controller,connection.port-type,ipv4.addresses,ipv4.method,ipv4.routes,ipv4.dns,ipv4.routing-rules,ipv4.never-default,ipv6.addresses,ipv6.method,ipv6.routes,ipv6.dns,ipv6.routing-rules,ipv6.addr-gen-mode,ovs-external-ids.data" pid=51803 uid=0 result="success"
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0259] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0260] audit: op="connection-add" uuid="329a5fc7-eb9b-4753-b5f4-22dc70e0e1e9" name="vlan20-if" pid=51803 uid=0 result="success"
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0271] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0272] audit: op="connection-add" uuid="26e682c0-91cc-4cf7-b67b-7d83e9fe2579" name="vlan21-if" pid=51803 uid=0 result="success"
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0282] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0283] audit: op="connection-add" uuid="2337e498-43ae-4e56-bf6f-ba7a085c31a7" name="vlan22-if" pid=51803 uid=0 result="success"
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0294] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0295] audit: op="connection-add" uuid="b221e218-d89b-4dde-9771-ab3fa0888d9f" name="vlan23-if" pid=51803 uid=0 result="success"
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0303] audit: op="connection-delete" uuid="09541ed1-27f0-3dab-920e-bf33aaba73ff" name="Wired connection 1" pid=51803 uid=0 result="success"
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0312] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0318] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0320] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (1243cce9-421d-4253-b9ed-b59bd081783d)
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0321] audit: op="connection-activate" uuid="1243cce9-421d-4253-b9ed-b59bd081783d" name="br-ex-br" pid=51803 uid=0 result="success"
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0322] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0326] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0328] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (a0e24a11-f180-4069-8ae5-827540d8884f)
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0329] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0333] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0335] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (c2517814-d9f4-44f9-9043-95bf252a8f9d)
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0336] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0340] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0342] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (0cfa67fb-288b-46a2-82e9-107bddfde4c7)
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0343] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0347] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0349] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (6d37eb1c-302e-4556-b2dc-6307938d5eaf)
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0350] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0354] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0356] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (6d78c477-4735-44da-8d44-e935b3b614ab)
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0357] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0361] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0363] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (e667de12-3d14-45b8-9a07-40279d5a0a48)
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0364] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0365] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0366] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0370] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0373] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0375] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (cf0fd381-56be-47d9-948c-451180b92cf3)
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0376] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0378] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0379] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0379] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0380] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0386] device (eth1): disconnecting for new activation request.
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0386] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0388] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0389] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0390] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0391] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0393] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0395] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (329a5fc7-eb9b-4753-b5f4-22dc70e0e1e9)
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0396] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0398] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0399] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0399] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0405] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0407] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0409] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (26e682c0-91cc-4cf7-b67b-7d83e9fe2579)
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0410] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0411] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0412] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0413] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0414] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0417] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0419] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (2337e498-43ae-4e56-bf6f-ba7a085c31a7)
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0420] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0421] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0422] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0423] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0424] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0427] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0429] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (b221e218-d89b-4dde-9771-ab3fa0888d9f)
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0430] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0431] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0432] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0433] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0434] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0442] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.method,ipv6.may-fail,ipv6.routes,ipv6.addr-gen-mode,802-3-ethernet.mtu" pid=51803 uid=0 result="success"
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0444] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0446] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0447] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0451] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0453] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 kernel: ovs-system: entered promiscuous mode
Nov 26 12:34:06 compute-0 kernel: Timeout policy base is empty
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0455] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0458] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0459] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0461] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0464] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0465] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0466] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0469] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0471] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0473] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0474] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0477] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0480] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0481] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0482] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0485] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 systemd-udevd[51809]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0487] dhcp4 (eth0): canceled DHCP transaction
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0487] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0487] dhcp4 (eth0): state changed no lease
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0488] dhcp6 (eth0): canceled DHCP transaction
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0488] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0488] dhcp6 (eth0): state changed no lease
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0491] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 26 12:34:06 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0497] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0499] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51803 uid=0 result="fail" reason="Device is not activated"
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0528] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0529] dhcp4 (eth0): state changed new lease, address=192.168.26.109
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0545] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0571] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0600] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0614] device (eth1): disconnecting for new activation request.
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0615] audit: op="connection-activate" uuid="7797382d-d835-51bb-84eb-feed5516994b" name="ci-private-network" pid=51803 uid=0 result="success"
Nov 26 12:34:06 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0652] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0701] device (eth1): Activation: starting connection 'ci-private-network' (7797382d-d835-51bb-84eb-feed5516994b)
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0704] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0705] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51803 uid=0 result="success"
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0708] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0710] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0713] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0715] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0718] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0719] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0720] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0720] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0721] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0722] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0723] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0727] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0730] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0731] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0733] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0735] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0738] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0739] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0742] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0744] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0746] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0748] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0751] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0762] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0764] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 kernel: br-ex: entered promiscuous mode
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0801] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0803] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0806] device (eth1): Activation: successful, device activated.
Nov 26 12:34:06 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0882] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0889] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 kernel: vlan22: entered promiscuous mode
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0916] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0919] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0922] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0981] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.0988] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.1011] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.1012] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.1015] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 26 12:34:06 compute-0 kernel: vlan21: entered promiscuous mode
Nov 26 12:34:06 compute-0 kernel: vlan23: entered promiscuous mode
Nov 26 12:34:06 compute-0 systemd-udevd[51808]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.1126] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 26 12:34:06 compute-0 kernel: vlan20: entered promiscuous mode
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.1145] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.1210] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.1213] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.1213] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.1217] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.1227] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.1247] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.1249] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.1250] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.1253] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.1275] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.1291] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.1292] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 12:34:06 compute-0 NetworkManager[49024]: <info>  [1764160446.1296] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 26 12:34:07 compute-0 NetworkManager[49024]: <info>  [1764160447.2228] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51803 uid=0 result="success"
Nov 26 12:34:07 compute-0 NetworkManager[49024]: <info>  [1764160447.3379] checkpoint[0x561e45324950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 26 12:34:07 compute-0 NetworkManager[49024]: <info>  [1764160447.3381] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51803 uid=0 result="success"
Nov 26 12:34:07 compute-0 NetworkManager[49024]: <info>  [1764160447.4539] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51803 uid=0 result="success"
Nov 26 12:34:07 compute-0 NetworkManager[49024]: <info>  [1764160447.4549] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51803 uid=0 result="success"
Nov 26 12:34:07 compute-0 NetworkManager[49024]: <info>  [1764160447.6126] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51803 uid=0 result="success"
Nov 26 12:34:07 compute-0 NetworkManager[49024]: <info>  [1764160447.7268] checkpoint[0x561e45324a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 26 12:34:07 compute-0 NetworkManager[49024]: <info>  [1764160447.7272] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51803 uid=0 result="success"
Nov 26 12:34:07 compute-0 NetworkManager[49024]: <info>  [1764160447.9631] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51803 uid=0 result="success"
Nov 26 12:34:07 compute-0 NetworkManager[49024]: <info>  [1764160447.9643] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51803 uid=0 result="success"
Nov 26 12:34:07 compute-0 sudo[52156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynddrxhfjvqelnlfbftpuolfuqubeuru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160447.621027-295-186181571497076/AnsiballZ_async_status.py'
Nov 26 12:34:07 compute-0 sudo[52156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:08 compute-0 NetworkManager[49024]: <info>  [1764160448.1257] audit: op="networking-control" arg="global-dns-configuration" pid=51803 uid=0 result="success"
Nov 26 12:34:08 compute-0 NetworkManager[49024]: <info>  [1764160448.1269] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf)
Nov 26 12:34:08 compute-0 NetworkManager[49024]: <info>  [1764160448.1274] audit: op="networking-control" arg="global-dns-configuration" pid=51803 uid=0 result="success"
Nov 26 12:34:08 compute-0 python3.9[52158]: ansible-ansible.legacy.async_status Invoked with jid=j737676950907.51797 mode=status _async_dir=/root/.ansible_async
Nov 26 12:34:08 compute-0 NetworkManager[49024]: <info>  [1764160448.1336] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51803 uid=0 result="success"
Nov 26 12:34:08 compute-0 sudo[52156]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:08 compute-0 NetworkManager[49024]: <info>  [1764160448.2466] checkpoint[0x561e45324af0]: destroy /org/freedesktop/NetworkManager/Checkpoint/3
Nov 26 12:34:08 compute-0 NetworkManager[49024]: <info>  [1764160448.2469] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51803 uid=0 result="success"
Nov 26 12:34:08 compute-0 ansible-async_wrapper.py[51801]: Module complete (51801)
Nov 26 12:34:09 compute-0 ansible-async_wrapper.py[51800]: Done in kid B.
Nov 26 12:34:11 compute-0 sudo[52260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snflnqlxircilnqxfjhigcvzxnbqkvkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160447.621027-295-186181571497076/AnsiballZ_async_status.py'
Nov 26 12:34:11 compute-0 sudo[52260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:11 compute-0 python3.9[52262]: ansible-ansible.legacy.async_status Invoked with jid=j737676950907.51797 mode=status _async_dir=/root/.ansible_async
Nov 26 12:34:11 compute-0 sudo[52260]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:11 compute-0 sudo[52360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juzoamoegnucsmdsejeyvknejixkcgss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160447.621027-295-186181571497076/AnsiballZ_async_status.py'
Nov 26 12:34:11 compute-0 sudo[52360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:11 compute-0 python3.9[52362]: ansible-ansible.legacy.async_status Invoked with jid=j737676950907.51797 mode=cleanup _async_dir=/root/.ansible_async
Nov 26 12:34:11 compute-0 sudo[52360]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:12 compute-0 sudo[52512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehjmrncollyqspcqfafqbomxnpvendxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160451.972208-322-128240042946258/AnsiballZ_stat.py'
Nov 26 12:34:12 compute-0 sudo[52512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:12 compute-0 python3.9[52514]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:34:12 compute-0 sudo[52512]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:12 compute-0 sudo[52635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjcxkujmlkvnqcglzawzpqoovzcgnpyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160451.972208-322-128240042946258/AnsiballZ_copy.py'
Nov 26 12:34:12 compute-0 sudo[52635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:12 compute-0 python3.9[52637]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764160451.972208-322-128240042946258/.source.returncode _original_basename=.u_cj8cq_ follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:34:12 compute-0 sudo[52635]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:13 compute-0 sudo[52787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luigqagcluvpxlqbosdgnobcbjbrific ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160452.8525379-338-52409619940726/AnsiballZ_stat.py'
Nov 26 12:34:13 compute-0 sudo[52787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:13 compute-0 python3.9[52789]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:34:13 compute-0 sudo[52787]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:13 compute-0 sudo[52910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izechuxoinlkamuvvngupzjawvufkqro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160452.8525379-338-52409619940726/AnsiballZ_copy.py'
Nov 26 12:34:13 compute-0 sudo[52910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:13 compute-0 python3.9[52912]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764160452.8525379-338-52409619940726/.source.cfg _original_basename=.mm_xu9yd follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:34:13 compute-0 sudo[52910]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:13 compute-0 sudo[53062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clvofpydmithyjqcyzmwtmuupgvdzxia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160453.6888525-353-28555117808478/AnsiballZ_systemd.py'
Nov 26 12:34:13 compute-0 sudo[53062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:14 compute-0 python3.9[53064]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 12:34:14 compute-0 systemd[1]: Reloading Network Manager...
Nov 26 12:34:14 compute-0 NetworkManager[49024]: <info>  [1764160454.1742] audit: op="reload" arg="0" pid=53068 uid=0 result="success"
Nov 26 12:34:14 compute-0 NetworkManager[49024]: <info>  [1764160454.1747] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 26 12:34:14 compute-0 NetworkManager[49024]: <info>  [1764160454.1748] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 26 12:34:14 compute-0 systemd[1]: Reloaded Network Manager.
Nov 26 12:34:14 compute-0 sudo[53062]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:14 compute-0 sshd-session[45024]: Connection closed by 192.168.122.30 port 59206
Nov 26 12:34:14 compute-0 sshd-session[45021]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:34:14 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Nov 26 12:34:14 compute-0 systemd[1]: session-9.scope: Consumed 35.152s CPU time.
Nov 26 12:34:14 compute-0 systemd-logind[777]: Session 9 logged out. Waiting for processes to exit.
Nov 26 12:34:14 compute-0 systemd-logind[777]: Removed session 9.
Nov 26 12:34:19 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 26 12:34:20 compute-0 sshd-session[53101]: Accepted publickey for zuul from 192.168.122.30 port 52496 ssh2: ECDSA SHA256:oYqKaXpw3UXfGsjV9kVmxjhHDxrL0kntNo/c2mjveus
Nov 26 12:34:20 compute-0 systemd-logind[777]: New session 10 of user zuul.
Nov 26 12:34:20 compute-0 systemd[1]: Started Session 10 of User zuul.
Nov 26 12:34:20 compute-0 sshd-session[53101]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:34:20 compute-0 python3.9[53254]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:34:21 compute-0 python3.9[53409]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 12:34:22 compute-0 python3.9[53602]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:34:22 compute-0 sshd-session[53104]: Connection closed by 192.168.122.30 port 52496
Nov 26 12:34:22 compute-0 sshd-session[53101]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:34:22 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Nov 26 12:34:22 compute-0 systemd[1]: session-10.scope: Consumed 1.545s CPU time.
Nov 26 12:34:22 compute-0 systemd-logind[777]: Session 10 logged out. Waiting for processes to exit.
Nov 26 12:34:22 compute-0 systemd-logind[777]: Removed session 10.
Nov 26 12:34:24 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 26 12:34:28 compute-0 sshd-session[53631]: Accepted publickey for zuul from 192.168.122.30 port 54400 ssh2: ECDSA SHA256:oYqKaXpw3UXfGsjV9kVmxjhHDxrL0kntNo/c2mjveus
Nov 26 12:34:28 compute-0 systemd-logind[777]: New session 11 of user zuul.
Nov 26 12:34:28 compute-0 systemd[1]: Started Session 11 of User zuul.
Nov 26 12:34:28 compute-0 sshd-session[53631]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:34:28 compute-0 python3.9[53784]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:34:29 compute-0 python3.9[53938]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:34:30 compute-0 sudo[54092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiwkydgbcmqgyrbzcntehaggtvztjgan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160469.9882267-40-188969612591930/AnsiballZ_setup.py'
Nov 26 12:34:30 compute-0 sudo[54092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:30 compute-0 python3.9[54094]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 12:34:30 compute-0 sudo[54092]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:30 compute-0 sudo[54177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfpodfqoqladzvasjjdfcivtypxneaaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160469.9882267-40-188969612591930/AnsiballZ_dnf.py'
Nov 26 12:34:30 compute-0 sudo[54177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:31 compute-0 python3.9[54179]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 12:34:32 compute-0 sudo[54177]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:32 compute-0 sudo[54330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnodjuwvtvbvaakpxtbktblcbafyilwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160472.1545935-52-69436437602033/AnsiballZ_setup.py'
Nov 26 12:34:32 compute-0 sudo[54330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:32 compute-0 python3.9[54332]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 12:34:32 compute-0 sudo[54330]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:33 compute-0 sudo[54525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggnlccsammmgwupejjrsxqjghvdluesl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160472.9366176-63-863039795259/AnsiballZ_file.py'
Nov 26 12:34:33 compute-0 sudo[54525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:33 compute-0 python3.9[54527]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:34:33 compute-0 sudo[54525]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:33 compute-0 sudo[54677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdydwdafusukieopvisolfqpdlgzyuec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160473.5334148-71-144354361334146/AnsiballZ_command.py'
Nov 26 12:34:33 compute-0 sudo[54677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:33 compute-0 python3.9[54679]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:34:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck2820434000-merged.mount: Deactivated successfully.
Nov 26 12:34:34 compute-0 podman[54680]: 2025-11-26 12:34:34.017771221 +0000 UTC m=+0.029687612 system refresh
Nov 26 12:34:34 compute-0 sudo[54677]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:34 compute-0 sudo[54839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akgwbzosxhnhbihhdylujvdspohktslq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160474.1634064-79-59194933220714/AnsiballZ_stat.py'
Nov 26 12:34:34 compute-0 sudo[54839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:34 compute-0 python3.9[54841]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:34:34 compute-0 sudo[54839]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:34 compute-0 sudo[54962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeivelebtofqujloceqmcepolmyhnjgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160474.1634064-79-59194933220714/AnsiballZ_copy.py'
Nov 26 12:34:34 compute-0 sudo[54962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:35 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 12:34:35 compute-0 python3.9[54964]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764160474.1634064-79-59194933220714/.source.json follow=False _original_basename=podman_network_config.j2 checksum=8661de292338a04cb796b1cfbfa124fb87eda09c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:34:35 compute-0 sudo[54962]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:35 compute-0 sudo[55114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpwykbilprqctjoqhevankrobgealxhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160475.2616408-94-241087117069501/AnsiballZ_stat.py'
Nov 26 12:34:35 compute-0 sudo[55114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:35 compute-0 python3.9[55116]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:34:35 compute-0 sudo[55114]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:35 compute-0 sudo[55237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-angmyllyokspjmspsxdimpkzflaydwdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160475.2616408-94-241087117069501/AnsiballZ_copy.py'
Nov 26 12:34:35 compute-0 sudo[55237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:35 compute-0 python3.9[55239]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764160475.2616408-94-241087117069501/.source.conf follow=False _original_basename=registries.conf.j2 checksum=74ad3fdf1c9c551f4957cab58c04bb2f8b0dc3e4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:34:36 compute-0 sudo[55237]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:36 compute-0 sudo[55389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggwgetogfqqfevggwuarplnqfdseffzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160476.135893-110-271365068872169/AnsiballZ_ini_file.py'
Nov 26 12:34:36 compute-0 sudo[55389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:36 compute-0 python3.9[55391]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:34:36 compute-0 sudo[55389]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:36 compute-0 sudo[55541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmzogualhtxpugprhszqpkrwwzchaeii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160476.680862-110-10251900100853/AnsiballZ_ini_file.py'
Nov 26 12:34:36 compute-0 sudo[55541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:37 compute-0 python3.9[55543]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:34:37 compute-0 sudo[55541]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:37 compute-0 sudo[55694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giqofjxmprqqenafxeuxnohhgeuyvjtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160477.1386764-110-131046433876236/AnsiballZ_ini_file.py'
Nov 26 12:34:37 compute-0 sudo[55694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:37 compute-0 python3.9[55696]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:34:37 compute-0 sudo[55694]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:37 compute-0 sudo[55846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwfjwjbwhjwlrngkaavpysnfmwdemiku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160477.5800848-110-102833853151089/AnsiballZ_ini_file.py'
Nov 26 12:34:37 compute-0 sudo[55846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:37 compute-0 python3.9[55848]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:34:37 compute-0 sudo[55846]: pam_unix(sudo:session): session closed for user root
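The four ini_file tasks above set pids_limit, events_logger, runtime and network_backend in /etc/containers/containers.conf. A minimal Python sketch of the same edit using configparser (the path, sections and values are taken from the log entries above; the module's backup handling, ownership and setype=etc_t relabel are omitted, so this is an illustration rather than the module's actual implementation):

import configparser

CONF = "/etc/containers/containers.conf"  # target path from the log above

settings = {
    "containers": {"pids_limit": "4096"},
    "engine": {"events_logger": '"journald"', "runtime": '"crun"'},
    "network": {"network_backend": '"netavark"'},
}

cfg = configparser.ConfigParser()
cfg.read(CONF)  # keep any options already present in the file
for section, options in settings.items():
    if not cfg.has_section(section):
        cfg.add_section(section)
    for option, value in options.items():
        cfg.set(section, option, value)

with open(CONF, "w") as handle:
    cfg.write(handle)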
Nov 26 12:34:38 compute-0 sudo[55998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcwwtbphwmjobwjbjctzstgthqhzuhfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160478.0811715-141-65431646739356/AnsiballZ_dnf.py'
Nov 26 12:34:38 compute-0 sudo[55998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:38 compute-0 python3.9[56000]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 12:34:39 compute-0 sudo[55998]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:39 compute-0 sudo[56151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frtyciysikpfrvztzpacwozvjsamryhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160479.7024922-152-221782750609440/AnsiballZ_setup.py'
Nov 26 12:34:39 compute-0 sudo[56151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:40 compute-0 python3.9[56153]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:34:40 compute-0 sudo[56151]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:40 compute-0 sudo[56305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxzwtarxjhnayaygpistuyncuwkjxpnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160480.262202-160-8572665986624/AnsiballZ_stat.py'
Nov 26 12:34:40 compute-0 sudo[56305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:40 compute-0 python3.9[56307]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:34:40 compute-0 sudo[56305]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:40 compute-0 sudo[56457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaweosnvgdnasxjdnuksbtoguazwnuyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160480.7605865-169-189215408832514/AnsiballZ_stat.py'
Nov 26 12:34:40 compute-0 sudo[56457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:41 compute-0 python3.9[56459]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:34:41 compute-0 sudo[56457]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:41 compute-0 sudo[56609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkwewaizclxbcankkbpibdtsedauhcck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160481.2893686-179-139720676171930/AnsiballZ_command.py'
Nov 26 12:34:41 compute-0 sudo[56609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:41 compute-0 python3.9[56611]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:34:41 compute-0 sudo[56609]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:42 compute-0 sudo[56762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfosvqfobaeeilzdyolmdqlwswuexxvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160481.90241-189-156036198668630/AnsiballZ_service_facts.py'
Nov 26 12:34:42 compute-0 sudo[56762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:42 compute-0 python3.9[56764]: ansible-service_facts Invoked
Nov 26 12:34:42 compute-0 network[56781]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 12:34:42 compute-0 network[56782]: 'network-scripts' will be removed from distribution in near future.
Nov 26 12:34:42 compute-0 network[56783]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 12:34:44 compute-0 sudo[56762]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:44 compute-0 sudo[57066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxnnlajdfzoqzcpqkemhvcvgqxrtkyph ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764160484.5639367-204-112236805948800/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764160484.5639367-204-112236805948800/args'
Nov 26 12:34:44 compute-0 sudo[57066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:44 compute-0 sudo[57066]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:45 compute-0 sudo[57233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgutkcuaaytwwynzbdnhsyqetqtyelfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160485.0058715-215-75991306849472/AnsiballZ_dnf.py'
Nov 26 12:34:45 compute-0 sudo[57233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:45 compute-0 python3.9[57235]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 12:34:46 compute-0 sudo[57233]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:46 compute-0 sudo[57386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lobbbiksgpeybteboianyiqgzsyzzrpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160486.5426116-228-174584069727689/AnsiballZ_package_facts.py'
Nov 26 12:34:46 compute-0 sudo[57386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:47 compute-0 python3.9[57388]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 26 12:34:47 compute-0 sudo[57386]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:47 compute-0 sudo[57538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybjqyutiimhvetcoibsubyimyswrfuda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160487.709519-238-182057613396427/AnsiballZ_stat.py'
Nov 26 12:34:47 compute-0 sudo[57538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:48 compute-0 python3.9[57540]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:34:48 compute-0 sudo[57538]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:48 compute-0 sudo[57663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awohlbrdmwetrrpzqfgqvwgkpkziiram ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160487.709519-238-182057613396427/AnsiballZ_copy.py'
Nov 26 12:34:48 compute-0 sudo[57663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:48 compute-0 python3.9[57665]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764160487.709519-238-182057613396427/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:34:48 compute-0 sudo[57663]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:48 compute-0 sudo[57817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gougoedobuybdtfzytfpkxmfrjhhqewv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160488.6718094-253-15708298012318/AnsiballZ_stat.py'
Nov 26 12:34:48 compute-0 sudo[57817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:49 compute-0 python3.9[57819]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:34:49 compute-0 sudo[57817]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:49 compute-0 sudo[57942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzgpvsnuqyocnypaanazhjjisbcnbwgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160488.6718094-253-15708298012318/AnsiballZ_copy.py'
Nov 26 12:34:49 compute-0 sudo[57942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:49 compute-0 python3.9[57944]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764160488.6718094-253-15708298012318/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:34:49 compute-0 sudo[57942]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:50 compute-0 sudo[58096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejqjzhyysgptkkqvlmauinudvdenvdis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160489.740813-274-169513133540840/AnsiballZ_lineinfile.py'
Nov 26 12:34:50 compute-0 sudo[58096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:50 compute-0 python3.9[58098]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:34:50 compute-0 sudo[58096]: pam_unix(sudo:session): session closed for user root
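The lineinfile task above pins PEERNTP=no in /etc/sysconfig/network so that DHCP-supplied NTP servers are not added on top of the chrony configuration written a few steps earlier. A rough Python equivalent of that single edit (path, line and regexp come from the log; the module's backup and mode=0644 handling are left out):

import re
from pathlib import Path

path = Path("/etc/sysconfig/network")   # create=True in the task, so a missing file is fine
lines = path.read_text().splitlines() if path.exists() else []

pattern = re.compile(r"^PEERNTP=")      # regexp from the task
if any(pattern.match(line) for line in lines):
    lines = ["PEERNTP=no" if pattern.match(line) else line for line in lines]
else:
    lines.append("PEERNTP=no")          # line from the task

path.write_text("\n".join(lines) + "\n")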
Nov 26 12:34:50 compute-0 sudo[58250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyvqzwcbcpzhwrzxtielerayvrezalui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160490.7157903-289-77746638489324/AnsiballZ_setup.py'
Nov 26 12:34:50 compute-0 sudo[58250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:51 compute-0 python3.9[58252]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 12:34:51 compute-0 sudo[58250]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:51 compute-0 sudo[58334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwneecfroaoysfofvtsmijcgacyqorqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160490.7157903-289-77746638489324/AnsiballZ_systemd.py'
Nov 26 12:34:51 compute-0 sudo[58334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:51 compute-0 python3.9[58336]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:34:52 compute-0 sudo[58334]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:52 compute-0 sudo[58488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqmdidyqkngjhmrywkcrfemuemqeuhgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160492.3659494-305-20533528187905/AnsiballZ_setup.py'
Nov 26 12:34:52 compute-0 sudo[58488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:52 compute-0 python3.9[58490]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 12:34:52 compute-0 sudo[58488]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:53 compute-0 sudo[58572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skyhjbealssuipafyvkvadtpycqimpsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160492.3659494-305-20533528187905/AnsiballZ_systemd.py'
Nov 26 12:34:53 compute-0 sudo[58572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:53 compute-0 python3.9[58574]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 12:34:53 compute-0 chronyd[784]: chronyd exiting
Nov 26 12:34:53 compute-0 systemd[1]: Stopping NTP client/server...
Nov 26 12:34:53 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Nov 26 12:34:53 compute-0 systemd[1]: Stopped NTP client/server.
Nov 26 12:34:53 compute-0 systemd[1]: Starting NTP client/server...
Nov 26 12:34:53 compute-0 chronyd[58583]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 26 12:34:53 compute-0 chronyd[58583]: Frequency -9.271 +/- 0.316 ppm read from /var/lib/chrony/drift
Nov 26 12:34:53 compute-0 chronyd[58583]: Loaded seccomp filter (level 2)
Nov 26 12:34:53 compute-0 systemd[1]: Started NTP client/server.
Nov 26 12:34:53 compute-0 sudo[58572]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:53 compute-0 sshd-session[53634]: Connection closed by 192.168.122.30 port 54400
Nov 26 12:34:53 compute-0 sshd-session[53631]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:34:53 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Nov 26 12:34:53 compute-0 systemd[1]: session-11.scope: Consumed 17.807s CPU time.
Nov 26 12:34:53 compute-0 systemd-logind[777]: Session 11 logged out. Waiting for processes to exit.
Nov 26 12:34:53 compute-0 systemd-logind[777]: Removed session 11.
Nov 26 12:34:58 compute-0 sshd-session[58609]: Accepted publickey for zuul from 192.168.122.30 port 45476 ssh2: ECDSA SHA256:oYqKaXpw3UXfGsjV9kVmxjhHDxrL0kntNo/c2mjveus
Nov 26 12:34:58 compute-0 systemd-logind[777]: New session 12 of user zuul.
Nov 26 12:34:58 compute-0 systemd[1]: Started Session 12 of User zuul.
Nov 26 12:34:58 compute-0 sshd-session[58609]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:34:58 compute-0 sudo[58762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-monoygdgngzhltomzspaobklxgynmwra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160498.3909545-22-67756619378516/AnsiballZ_file.py'
Nov 26 12:34:58 compute-0 sudo[58762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:58 compute-0 python3.9[58764]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:34:58 compute-0 sudo[58762]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:59 compute-0 sudo[58914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htygmwhjpwicddigyjfbzsumnrpzxexa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160499.0927436-34-219941388209047/AnsiballZ_stat.py'
Nov 26 12:34:59 compute-0 sudo[58914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:34:59 compute-0 python3.9[58916]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:34:59 compute-0 sudo[58914]: pam_unix(sudo:session): session closed for user root
Nov 26 12:34:59 compute-0 sudo[59037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qomfrzrfodkthgzmjlvxvzowrgsmbpqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160499.0927436-34-219941388209047/AnsiballZ_copy.py'
Nov 26 12:34:59 compute-0 sudo[59037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:00 compute-0 python3.9[59039]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764160499.0927436-34-219941388209047/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:35:00 compute-0 sudo[59037]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:00 compute-0 sshd-session[58612]: Connection closed by 192.168.122.30 port 45476
Nov 26 12:35:00 compute-0 sshd-session[58609]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:35:00 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Nov 26 12:35:00 compute-0 systemd[1]: session-12.scope: Consumed 1.109s CPU time.
Nov 26 12:35:00 compute-0 systemd-logind[777]: Session 12 logged out. Waiting for processes to exit.
Nov 26 12:35:00 compute-0 systemd-logind[777]: Removed session 12.
Nov 26 12:35:05 compute-0 sshd-session[59064]: Accepted publickey for zuul from 192.168.122.30 port 37218 ssh2: ECDSA SHA256:oYqKaXpw3UXfGsjV9kVmxjhHDxrL0kntNo/c2mjveus
Nov 26 12:35:05 compute-0 systemd-logind[777]: New session 13 of user zuul.
Nov 26 12:35:05 compute-0 systemd[1]: Started Session 13 of User zuul.
Nov 26 12:35:05 compute-0 sshd-session[59064]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:35:06 compute-0 python3.9[59217]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:35:06 compute-0 sudo[59371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkmdjvzcsdkapmmcusrbyuabulnndgyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160506.428743-33-25480735847962/AnsiballZ_file.py'
Nov 26 12:35:06 compute-0 sudo[59371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:06 compute-0 python3.9[59373]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:35:06 compute-0 sudo[59371]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:07 compute-0 sudo[59546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehxcxpxurhvwgtiburtgqfflppjucocc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160506.9851377-41-59377645806977/AnsiballZ_stat.py'
Nov 26 12:35:07 compute-0 sudo[59546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:07 compute-0 python3.9[59548]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:35:07 compute-0 sudo[59546]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:07 compute-0 sudo[59669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usppktofirbjcgapkqtphfdnjssvdkuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160506.9851377-41-59377645806977/AnsiballZ_copy.py'
Nov 26 12:35:07 compute-0 sudo[59669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:07 compute-0 python3.9[59671]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764160506.9851377-41-59377645806977/.source.json _original_basename=.09hf_c3x follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:35:07 compute-0 sudo[59669]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:08 compute-0 sudo[59821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejrqyoorwogllllpmufeuzbwfaqgrdad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160508.1581402-64-137774190168762/AnsiballZ_stat.py'
Nov 26 12:35:08 compute-0 sudo[59821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:08 compute-0 python3.9[59823]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:35:08 compute-0 sudo[59821]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:08 compute-0 sudo[59944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oekzoaeifwoqqshbmhutahdeaagkmfgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160508.1581402-64-137774190168762/AnsiballZ_copy.py'
Nov 26 12:35:08 compute-0 sudo[59944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:08 compute-0 python3.9[59946]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764160508.1581402-64-137774190168762/.source _original_basename=.slzi8078 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:35:08 compute-0 sudo[59944]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:09 compute-0 sudo[60096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htlaophyjkcrwcqkplaawwfphsjlpcbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160509.046218-80-53807232523846/AnsiballZ_file.py'
Nov 26 12:35:09 compute-0 sudo[60096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:09 compute-0 python3.9[60098]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:35:09 compute-0 sudo[60096]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:09 compute-0 sudo[60248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkrgwxlbswuzycvppuleayrbgcfbitvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160509.5094385-88-234829796847992/AnsiballZ_stat.py'
Nov 26 12:35:09 compute-0 sudo[60248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:09 compute-0 python3.9[60250]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:35:09 compute-0 sudo[60248]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:10 compute-0 sudo[60371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdvtswnvsshpbnxcevrgemvcnlqtouyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160509.5094385-88-234829796847992/AnsiballZ_copy.py'
Nov 26 12:35:10 compute-0 sudo[60371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:10 compute-0 python3.9[60373]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764160509.5094385-88-234829796847992/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:35:10 compute-0 sudo[60371]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:10 compute-0 sudo[60523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqsqkcxvfugbdquhlgvqjewymuoavnuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160510.328495-88-14132914980075/AnsiballZ_stat.py'
Nov 26 12:35:10 compute-0 sudo[60523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:10 compute-0 python3.9[60525]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:35:10 compute-0 sudo[60523]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:10 compute-0 sudo[60646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-strmevwxduqwcepqvvghsidjexibqzom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160510.328495-88-14132914980075/AnsiballZ_copy.py'
Nov 26 12:35:10 compute-0 sudo[60646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:11 compute-0 python3.9[60648]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764160510.328495-88-14132914980075/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:35:11 compute-0 sudo[60646]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:11 compute-0 sudo[60798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sydxxqevjxpqtbhvyrgkqquaztcziyez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160511.1529276-117-190891558136208/AnsiballZ_file.py'
Nov 26 12:35:11 compute-0 sudo[60798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:11 compute-0 python3.9[60800]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:35:11 compute-0 sudo[60798]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:11 compute-0 sudo[60950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhavtwsechoqwmnndghlceauozfmlpyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160511.5941892-125-94269362455226/AnsiballZ_stat.py'
Nov 26 12:35:11 compute-0 sudo[60950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:11 compute-0 python3.9[60952]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:35:11 compute-0 sudo[60950]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:12 compute-0 sudo[61073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkuqtukvkxcohgghfsyobnyctuhoreia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160511.5941892-125-94269362455226/AnsiballZ_copy.py'
Nov 26 12:35:12 compute-0 sudo[61073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:12 compute-0 python3.9[61075]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764160511.5941892-125-94269362455226/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:35:12 compute-0 sudo[61073]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:12 compute-0 sudo[61225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcxarxqttvrndlzxanzdquhvzpkifssh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160512.4004223-140-59433226095829/AnsiballZ_stat.py'
Nov 26 12:35:12 compute-0 sudo[61225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:12 compute-0 python3.9[61227]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:35:12 compute-0 sudo[61225]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:12 compute-0 sudo[61348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isizcghdkdgxicnotijgdtellcqcfgws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160512.4004223-140-59433226095829/AnsiballZ_copy.py'
Nov 26 12:35:12 compute-0 sudo[61348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:13 compute-0 python3.9[61350]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764160512.4004223-140-59433226095829/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:35:13 compute-0 sudo[61348]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:13 compute-0 sudo[61500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-melnchvhjbsrzfbcptjwqxsjufsqgkxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160513.214397-155-258697572621477/AnsiballZ_systemd.py'
Nov 26 12:35:13 compute-0 sudo[61500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:13 compute-0 python3.9[61502]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:35:13 compute-0 systemd[1]: Reloading.
Nov 26 12:35:13 compute-0 systemd-rc-local-generator[61526]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:35:13 compute-0 systemd-sysv-generator[61530]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:35:14 compute-0 systemd[1]: Reloading.
Nov 26 12:35:14 compute-0 systemd-sysv-generator[61561]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:35:14 compute-0 systemd-rc-local-generator[61558]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:35:14 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Nov 26 12:35:14 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Nov 26 12:35:14 compute-0 sudo[61500]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:14 compute-0 sudo[61725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggtevtprodyqlrhgqmhzmbidaaoxidfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160514.3209085-163-108951258951318/AnsiballZ_stat.py'
Nov 26 12:35:14 compute-0 sudo[61725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:14 compute-0 python3.9[61727]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:35:14 compute-0 sudo[61725]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:14 compute-0 sudo[61848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgcermwipblfpakkbsngckcoxexeqddp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160514.3209085-163-108951258951318/AnsiballZ_copy.py'
Nov 26 12:35:14 compute-0 sudo[61848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:15 compute-0 python3.9[61850]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764160514.3209085-163-108951258951318/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:35:15 compute-0 sudo[61848]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:15 compute-0 sudo[62000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qllrtngvkifuqhihgxszufcblemdrbgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160515.1699994-178-136309886822281/AnsiballZ_stat.py'
Nov 26 12:35:15 compute-0 sudo[62000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:15 compute-0 python3.9[62002]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:35:15 compute-0 sudo[62000]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:15 compute-0 sudo[62123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsbwdzszrojqifkcwvpfoxekwtruntlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160515.1699994-178-136309886822281/AnsiballZ_copy.py'
Nov 26 12:35:15 compute-0 sudo[62123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:15 compute-0 python3.9[62125]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764160515.1699994-178-136309886822281/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:35:15 compute-0 sudo[62123]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:16 compute-0 sudo[62275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyjqmdckwrdsowizixyicqunkjjnvech ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160516.0530453-193-89303095526270/AnsiballZ_systemd.py'
Nov 26 12:35:16 compute-0 sudo[62275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:16 compute-0 python3.9[62277]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:35:16 compute-0 systemd[1]: Reloading.
Nov 26 12:35:16 compute-0 systemd-rc-local-generator[62298]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:35:16 compute-0 systemd-sysv-generator[62301]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:35:16 compute-0 systemd[1]: Reloading.
Nov 26 12:35:16 compute-0 systemd-rc-local-generator[62335]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:35:16 compute-0 systemd-sysv-generator[62338]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:35:16 compute-0 systemd[1]: Starting Create netns directory...
Nov 26 12:35:16 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 26 12:35:16 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 26 12:35:16 compute-0 systemd[1]: Finished Create netns directory.
Nov 26 12:35:16 compute-0 sudo[62275]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:17 compute-0 python3.9[62503]: ansible-ansible.builtin.service_facts Invoked
Nov 26 12:35:17 compute-0 network[62520]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 12:35:17 compute-0 network[62521]: 'network-scripts' will be removed from distribution in near future.
Nov 26 12:35:17 compute-0 network[62522]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 12:35:19 compute-0 sudo[62782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olkzqcguuperntyappkbprhxugjbxuhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160519.3005862-209-167199693731492/AnsiballZ_systemd.py'
Nov 26 12:35:19 compute-0 sudo[62782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:19 compute-0 python3.9[62784]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:35:19 compute-0 systemd[1]: Reloading.
Nov 26 12:35:19 compute-0 systemd-sysv-generator[62812]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:35:19 compute-0 systemd-rc-local-generator[62809]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:35:19 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 26 12:35:20 compute-0 iptables.init[62824]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 26 12:35:20 compute-0 iptables.init[62824]: iptables: Flushing firewall rules: [  OK  ]
Nov 26 12:35:20 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Nov 26 12:35:20 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 26 12:35:20 compute-0 sudo[62782]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:20 compute-0 sudo[63018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaxxunsgdkamaounvbkgnahevksfznfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160520.2744067-209-1465663589297/AnsiballZ_systemd.py'
Nov 26 12:35:20 compute-0 sudo[63018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:20 compute-0 python3.9[63020]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:35:20 compute-0 sudo[63018]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:21 compute-0 sudo[63172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnwvkgxudgdkbotqntnxexeqnhrbgfpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160520.868599-225-225324631155385/AnsiballZ_systemd.py'
Nov 26 12:35:21 compute-0 sudo[63172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:21 compute-0 python3.9[63174]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:35:21 compute-0 systemd[1]: Reloading.
Nov 26 12:35:21 compute-0 systemd-sysv-generator[63200]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:35:21 compute-0 systemd-rc-local-generator[63197]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:35:21 compute-0 systemd[1]: Starting Netfilter Tables...
Nov 26 12:35:21 compute-0 systemd[1]: Finished Netfilter Tables.
Nov 26 12:35:21 compute-0 sudo[63172]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:21 compute-0 sudo[63363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhehcibsvqnhpdfwlpgbyfxhqjiweigf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160521.6214738-233-30180200401289/AnsiballZ_command.py'
Nov 26 12:35:21 compute-0 sudo[63363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:22 compute-0 python3.9[63365]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:35:22 compute-0 sudo[63363]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:22 compute-0 sudo[63516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqpopxzeiglygsugiilyitbfaidvsxlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160522.306581-247-209223064560748/AnsiballZ_stat.py'
Nov 26 12:35:22 compute-0 sudo[63516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:22 compute-0 python3.9[63518]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:35:22 compute-0 sudo[63516]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:22 compute-0 sudo[63641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bunbheihdcqagnmpkgcikolnyumiycad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160522.306581-247-209223064560748/AnsiballZ_copy.py'
Nov 26 12:35:22 compute-0 sudo[63641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:23 compute-0 python3.9[63643]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764160522.306581-247-209223064560748/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:35:23 compute-0 sudo[63641]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:23 compute-0 sudo[63794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eycnywkvxqaliyklgfczjanfsxkpadsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160523.133462-262-233313573302903/AnsiballZ_systemd.py'
Nov 26 12:35:23 compute-0 sudo[63794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:23 compute-0 python3.9[63796]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 12:35:23 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Nov 26 12:35:23 compute-0 sshd[963]: Received SIGHUP; restarting.
Nov 26 12:35:23 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Nov 26 12:35:23 compute-0 sshd[963]: Server listening on 0.0.0.0 port 22.
Nov 26 12:35:23 compute-0 sshd[963]: Server listening on :: port 22.
Nov 26 12:35:23 compute-0 sudo[63794]: pam_unix(sudo:session): session closed for user root
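The sshd_config deployment above uses copy with validate=/usr/sbin/sshd -T -f %s and then reloads the unit, so a syntactically broken config is never installed. A minimal sketch of that validate-then-reload pattern; the candidate path is hypothetical and this is not the role's actual code:

    import shutil
    import subprocess

    def install_sshd_config(candidate: str, dest: str = "/etc/ssh/sshd_config") -> None:
        # `sshd -T -f FILE` parses FILE and exits non-zero on errors, so a bad
        # config never reaches /etc/ssh/sshd_config.
        subprocess.run(["/usr/sbin/sshd", "-T", "-f", candidate], check=True)
        shutil.copyfile(candidate, dest)
        # Reload (SIGHUP) rather than restart, so existing SSH sessions stay up.
        subprocess.run(["systemctl", "reload", "sshd"], check=True)

    # Hypothetical staged file, e.g. written by a template step:
    install_sshd_config("/tmp/sshd_config.candidate")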
Nov 26 12:35:23 compute-0 sudo[63950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxnzvbrjvjkozwoxnarvbzqqttrpqkig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160523.703986-270-13731874356429/AnsiballZ_file.py'
Nov 26 12:35:23 compute-0 sudo[63950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:24 compute-0 python3.9[63952]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:35:24 compute-0 sudo[63950]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:24 compute-0 sudo[64102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svcctyqxhdnzlnpecomdwyycmdxxrxry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160524.1393857-278-89339686471090/AnsiballZ_stat.py'
Nov 26 12:35:24 compute-0 sudo[64102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:24 compute-0 python3.9[64104]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:35:24 compute-0 sudo[64102]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:24 compute-0 sudo[64225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntsqpizqkbwcgihtalqdbmxgjbekvfpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160524.1393857-278-89339686471090/AnsiballZ_copy.py'
Nov 26 12:35:24 compute-0 sudo[64225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:24 compute-0 python3.9[64227]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764160524.1393857-278-89339686471090/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:35:24 compute-0 sudo[64225]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:25 compute-0 sudo[64377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jycpjzpmssgmykitftsrmgbtypkhrqwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160525.0025759-296-195647035669610/AnsiballZ_timezone.py'
Nov 26 12:35:25 compute-0 sudo[64377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:25 compute-0 python3.9[64379]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 26 12:35:25 compute-0 systemd[1]: Starting Time & Date Service...
Nov 26 12:35:25 compute-0 systemd[1]: Started Time & Date Service.
Nov 26 12:35:25 compute-0 sudo[64377]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:25 compute-0 sudo[64533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxxywrmlxlqdqurskfuiagpnvoboxpvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160525.6607196-305-180653570721605/AnsiballZ_file.py'
Nov 26 12:35:25 compute-0 sudo[64533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:26 compute-0 python3.9[64535]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:35:26 compute-0 sudo[64533]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:26 compute-0 sudo[64685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbhlmfamcaectyoihyivjrtpynhnemee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160526.1321833-313-248952859833181/AnsiballZ_stat.py'
Nov 26 12:35:26 compute-0 sudo[64685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:26 compute-0 python3.9[64687]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:35:26 compute-0 sudo[64685]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:26 compute-0 sudo[64808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdwochtelkehrhqdxueenfupdthhapbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160526.1321833-313-248952859833181/AnsiballZ_copy.py'
Nov 26 12:35:26 compute-0 sudo[64808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:26 compute-0 python3.9[64810]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764160526.1321833-313-248952859833181/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:35:26 compute-0 sudo[64808]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:27 compute-0 sudo[64960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqundrbrjmilbdzjcxusqlcofngiykpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160527.007756-328-202451966565404/AnsiballZ_stat.py'
Nov 26 12:35:27 compute-0 sudo[64960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:27 compute-0 python3.9[64962]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:35:27 compute-0 sudo[64960]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:27 compute-0 sudo[65083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxdhrmyskuvwroivtrpjxbtsmoraxlrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160527.007756-328-202451966565404/AnsiballZ_copy.py'
Nov 26 12:35:27 compute-0 sudo[65083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:27 compute-0 python3.9[65085]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764160527.007756-328-202451966565404/.source.yaml _original_basename=.7ug6e2qj follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:35:27 compute-0 sudo[65083]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:28 compute-0 sudo[65235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enswbeuthwjajuuledjfltjaxiwkfdaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160527.8461552-343-109314030339093/AnsiballZ_stat.py'
Nov 26 12:35:28 compute-0 sudo[65235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:28 compute-0 python3.9[65237]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:35:28 compute-0 sudo[65235]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:28 compute-0 sudo[65358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjxewqnmaojxxwxubpnmkpjilgrjfiuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160527.8461552-343-109314030339093/AnsiballZ_copy.py'
Nov 26 12:35:28 compute-0 sudo[65358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:28 compute-0 python3.9[65360]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764160527.8461552-343-109314030339093/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:35:28 compute-0 sudo[65358]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:28 compute-0 sudo[65510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzzbglmdfbubeopvwxfwloowmfnqmzhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160528.6825871-358-125485242650792/AnsiballZ_command.py'
Nov 26 12:35:28 compute-0 sudo[65510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:29 compute-0 python3.9[65512]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:35:29 compute-0 sudo[65510]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:29 compute-0 sudo[65663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdjdrvtlbfgseqqnjpfoatpwpxgbferk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160529.1652825-366-154619197390242/AnsiballZ_command.py'
Nov 26 12:35:29 compute-0 sudo[65663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:29 compute-0 python3.9[65665]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:35:29 compute-0 sudo[65663]: pam_unix(sudo:session): session closed for user root
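Earlier in this run the ruleset was flushed with `nft flush ruleset` (12:35:22) and the iptables-compat baseline was loaded from /etc/nftables/iptables.nft; the command above then dumps the live ruleset as JSON for the role to inspect. A small sketch of that flush-and-dump pair, assuming nft is installed (illustrative only):

    import json
    import subprocess

    # Drop everything currently loaded in nftables.
    subprocess.run(["nft", "flush", "ruleset"], check=True)
    # ... baseline rules would be loaded here, e.g. `nft -f /etc/nftables/iptables.nft` ...
    # Dump the live ruleset as JSON; the top-level key is "nftables".
    out = subprocess.run(["nft", "-j", "list", "ruleset"],
                         check=True, capture_output=True, text=True)
    ruleset = json.loads(out.stdout)
    print(len(ruleset["nftables"]), "objects in the live ruleset")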
Nov 26 12:35:29 compute-0 sudo[65816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnltjiceffzwsdomlmpevujraxkzxfpv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764160529.656604-374-72852161406240/AnsiballZ_edpm_nftables_from_files.py'
Nov 26 12:35:29 compute-0 sudo[65816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:30 compute-0 python3[65818]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 26 12:35:30 compute-0 sudo[65816]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:30 compute-0 sudo[65968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qagtyjayaaddgxuqhrxxfjzyndprkctk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160530.2557251-382-10480013966688/AnsiballZ_stat.py'
Nov 26 12:35:30 compute-0 sudo[65968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:30 compute-0 python3.9[65970]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:35:30 compute-0 sudo[65968]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:30 compute-0 sudo[66091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gishddbcfemvxyrwmbjxeefbilxzhpcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160530.2557251-382-10480013966688/AnsiballZ_copy.py'
Nov 26 12:35:30 compute-0 sudo[66091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:30 compute-0 python3.9[66093]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764160530.2557251-382-10480013966688/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:35:30 compute-0 sudo[66091]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:31 compute-0 sudo[66243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlyfrautdfrzzzhmqmaeutrkqryapqbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160531.1011791-397-262642831478259/AnsiballZ_stat.py'
Nov 26 12:35:31 compute-0 sudo[66243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:31 compute-0 python3.9[66245]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:35:31 compute-0 sudo[66243]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:31 compute-0 sudo[66366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtyugsmheuovtwifzzshcefwiphnopkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160531.1011791-397-262642831478259/AnsiballZ_copy.py'
Nov 26 12:35:31 compute-0 sudo[66366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:31 compute-0 python3.9[66368]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764160531.1011791-397-262642831478259/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:35:31 compute-0 sudo[66366]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:32 compute-0 sudo[66518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swqwbevfcqtazobvvtzpatcwobcnfnqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160531.9427478-412-230396121106319/AnsiballZ_stat.py'
Nov 26 12:35:32 compute-0 sudo[66518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:32 compute-0 python3.9[66520]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:35:32 compute-0 sudo[66518]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:32 compute-0 sudo[66641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgyoplrsojnaozuyuyggorrfavlsbyvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160531.9427478-412-230396121106319/AnsiballZ_copy.py'
Nov 26 12:35:32 compute-0 sudo[66641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:32 compute-0 python3.9[66643]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764160531.9427478-412-230396121106319/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:35:32 compute-0 sudo[66641]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:32 compute-0 sudo[66793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoxqyixjdafpyjchrxwfnyeljcbpruql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160532.7527468-427-263062678697737/AnsiballZ_stat.py'
Nov 26 12:35:32 compute-0 sudo[66793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:33 compute-0 python3.9[66795]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:35:33 compute-0 sudo[66793]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:33 compute-0 sudo[66916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwynxwoxzaejffosddfmothpgkmmkveg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160532.7527468-427-263062678697737/AnsiballZ_copy.py'
Nov 26 12:35:33 compute-0 sudo[66916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:33 compute-0 python3.9[66918]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764160532.7527468-427-263062678697737/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:35:33 compute-0 sudo[66916]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:33 compute-0 sudo[67068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqndmvxwqnckznjkmrnxwwcocoqyfupt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160533.5846543-442-36517671802479/AnsiballZ_stat.py'
Nov 26 12:35:33 compute-0 sudo[67068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:33 compute-0 python3.9[67070]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:35:33 compute-0 sudo[67068]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:34 compute-0 sudo[67191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyypxxjwnmwdkpzdlpkhjlzeizmrfotg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160533.5846543-442-36517671802479/AnsiballZ_copy.py'
Nov 26 12:35:34 compute-0 sudo[67191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:34 compute-0 python3.9[67193]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764160533.5846543-442-36517671802479/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:35:34 compute-0 sudo[67191]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:34 compute-0 sudo[67343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oftmmjdlqqykyojocyhcekbfxwkabniy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160534.45661-457-201699825220737/AnsiballZ_file.py'
Nov 26 12:35:34 compute-0 sudo[67343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:34 compute-0 python3.9[67345]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:35:34 compute-0 sudo[67343]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:35 compute-0 sudo[67495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoymtuhpymobttahbiyaetcqokglygxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160534.9756143-465-37619995249175/AnsiballZ_command.py'
Nov 26 12:35:35 compute-0 sudo[67495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:35 compute-0 python3.9[67497]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:35:35 compute-0 sudo[67495]: pam_unix(sudo:session): session closed for user root
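The check above concatenates the edpm nftables fragments in dependency order (chains, flushes, rules, update-jumps, jumps) and feeds them to `nft -c -f -`, which parses the combined ruleset without applying it. A short Python sketch of the same pipeline; the file names are taken from the log, the wrapper itself is illustrative:

    import subprocess

    FRAGMENTS = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]

    combined = "".join(open(path).read() for path in FRAGMENTS)
    # -c = check only: validate the combined ruleset, do not touch the kernel.
    subprocess.run(["nft", "-c", "-f", "-"], input=combined, text=True, check=True)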
Nov 26 12:35:35 compute-0 sudo[67654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suqqroypihtxcbvjjstbunhbjuevvnil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160535.4726286-473-264724186654862/AnsiballZ_blockinfile.py'
Nov 26 12:35:35 compute-0 sudo[67654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:35 compute-0 python3.9[67656]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:35:35 compute-0 sudo[67654]: pam_unix(sudo:session): session closed for user root
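The blockinfile task above maintains a marked block in /etc/sysconfig/nftables.conf so the persistent nftables service loads the edpm files at boot. Reconstructed from the parameters logged above (marker "# {mark} ANSIBLE MANAGED BLOCK" with BEGIN/END), the resulting block should look roughly like:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK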
Nov 26 12:35:36 compute-0 sudo[67807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dybikodfbmchberwpbfjemusboikjrvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160536.116897-482-251867498781203/AnsiballZ_file.py'
Nov 26 12:35:36 compute-0 sudo[67807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:36 compute-0 python3.9[67809]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:35:36 compute-0 sudo[67807]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:36 compute-0 sudo[67959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhdaxbvrhkdmbvosripdtkyayodjlhbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160536.5399175-482-67338006330675/AnsiballZ_file.py'
Nov 26 12:35:36 compute-0 sudo[67959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:36 compute-0 python3.9[67961]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:35:36 compute-0 sudo[67959]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:37 compute-0 sudo[68111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnycmpvocbrgxthxhvagnxrrriixmsoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160536.9955502-497-40550569279552/AnsiballZ_mount.py'
Nov 26 12:35:37 compute-0 sudo[68111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:37 compute-0 python3.9[68113]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 26 12:35:37 compute-0 sudo[68111]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:37 compute-0 sudo[68264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unoqtakkxrgubzixhzcwmgssrxruvgiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160537.6205087-497-269899074739342/AnsiballZ_mount.py'
Nov 26 12:35:37 compute-0 sudo[68264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:37 compute-0 python3.9[68266]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 26 12:35:37 compute-0 sudo[68264]: pam_unix(sudo:session): session closed for user root
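The two ansible.posix.mount calls above (state=mounted, boot=True) create the mount points, add fstab entries, and mount hugetlbfs with the requested page size. A minimal Python sketch of the same effect; the fstab line format is standard, but the helper itself is illustrative and must run as root:

    import os
    import subprocess

    def mount_hugetlbfs(path: str, pagesize: str) -> None:
        os.makedirs(path, exist_ok=True)
        line = f"none {path} hugetlbfs pagesize={pagesize} 0 0\n"
        with open("/etc/fstab", "a+") as fstab:
            fstab.seek(0)
            if line not in fstab.read():        # keep the entry idempotent
                fstab.write(line)
        if not os.path.ismount(path):           # skip if already mounted
            subprocess.run(["mount", "-t", "hugetlbfs", "-o",
                            f"pagesize={pagesize}", "none", path], check=True)

    mount_hugetlbfs("/dev/hugepages1G", "1G")
    mount_hugetlbfs("/dev/hugepages2M", "2M")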
Nov 26 12:35:38 compute-0 sshd-session[59067]: Connection closed by 192.168.122.30 port 37218
Nov 26 12:35:38 compute-0 sshd-session[59064]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:35:38 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Nov 26 12:35:38 compute-0 systemd[1]: session-13.scope: Consumed 24.477s CPU time.
Nov 26 12:35:38 compute-0 systemd-logind[777]: Session 13 logged out. Waiting for processes to exit.
Nov 26 12:35:38 compute-0 systemd-logind[777]: Removed session 13.
Nov 26 12:35:43 compute-0 sshd-session[68292]: Accepted publickey for zuul from 192.168.122.30 port 47102 ssh2: ECDSA SHA256:oYqKaXpw3UXfGsjV9kVmxjhHDxrL0kntNo/c2mjveus
Nov 26 12:35:43 compute-0 systemd-logind[777]: New session 14 of user zuul.
Nov 26 12:35:43 compute-0 systemd[1]: Started Session 14 of User zuul.
Nov 26 12:35:43 compute-0 sshd-session[68292]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:35:44 compute-0 sudo[68445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnjyemjvdjykfmoujpifepgkisvgerqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160543.8046641-16-219979017225451/AnsiballZ_tempfile.py'
Nov 26 12:35:44 compute-0 sudo[68445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:44 compute-0 python3.9[68447]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 26 12:35:44 compute-0 sudo[68445]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:44 compute-0 sudo[68597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxdtbbxhfejfbdjrmfkvfhzdsihzuelb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160544.3789704-28-241852324185051/AnsiballZ_stat.py'
Nov 26 12:35:44 compute-0 sudo[68597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:44 compute-0 python3.9[68599]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:35:44 compute-0 sudo[68597]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:45 compute-0 sudo[68749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efsaqacywftrfbyzshwzzyzxjthdmijv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160544.9700162-38-148986682176601/AnsiballZ_setup.py'
Nov 26 12:35:45 compute-0 sudo[68749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:45 compute-0 python3.9[68751]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:35:45 compute-0 sudo[68749]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:46 compute-0 sudo[68901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaytznwfhxinoxeqdbwjbpdjbhshisnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160545.780104-47-150804129709822/AnsiballZ_blockinfile.py'
Nov 26 12:35:46 compute-0 sudo[68901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:46 compute-0 python3.9[68903]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZE1dpxvL8OPz/VjvFsUTPfsDH6vQml5mdj02SrlFJXfQ252JoKh5fIbIe5jq+eMTBsdiCv9Uyd8xyCUarLeNlJLXFWeql+5MwT2PuY4qrfay7YgFarsvqVEneCieDB/KjZaqMenEf/yZJjvCZifypNg9Of1e8QgrIOrGdP8zeyVeSR6g7d477abOVM7jqxl1dgu5rM+rlTW4DHASE9s/qzG6qu1p1HB8ZEiKsXEtoLhomhrwcTSk94ELWY62pIn8cyapkDsX3TnUoIzQZE8wHuKD+UpY8fWfvFoKo+fdR3UnZmegzF7lylv9XeU/lSEgeDN/LggErCBVNDLBaUG54mPUhEXh3MLVnzgSeCs+DGrchncrg0mgqgKPeAPoZrH+WzFuvKCCsGBjrX8QhxkOy2Q43UXW4uIZlhuzPSsZEnqjd+oz98yWJanGeEkfPCs4nqf6Btd135JYpY2UQoryGnawaWQx/nbU9rePlzY7IbAuDaivVwT3RTKUEmoXfmis=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKuDB4s6WXjGK+4hbQXMcwUNsMga+M2cTnBcJkimQdRS
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK2PGuuGeSfke7nCSgI56m6cuyn45RHczvKouRcqVMRuIWRuDTGV0zknjmAVTtZjpkmBwAytv1rMLkBGlVHtizM=
                                             create=True mode=0644 path=/tmp/ansible.yrk3lcpn state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:35:46 compute-0 sudo[68901]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:46 compute-0 sudo[69053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzbgnrlgzzfljpfkvgskriqaxqecmgir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160546.3551245-55-163706809489773/AnsiballZ_command.py'
Nov 26 12:35:46 compute-0 sudo[69053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:46 compute-0 python3.9[69055]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.yrk3lcpn' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:35:46 compute-0 sudo[69053]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:47 compute-0 sudo[69207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhfblrjnycwptvxplkiekgmrtmtedkcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160546.9099379-63-96528422043131/AnsiballZ_file.py'
Nov 26 12:35:47 compute-0 sudo[69207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:47 compute-0 python3.9[69209]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.yrk3lcpn state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:35:47 compute-0 sudo[69207]: pam_unix(sudo:session): session closed for user root
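The known_hosts handling above gathers the host's public keys via the setup module, renders them into a temp file with blockinfile, copies that file over /etc/ssh/ssh_known_hosts, and deletes the temp file. Each entry is one line of the form "<names>,<address> <key-type> <base64 key>", as visible in the block contents above. A tiny sketch with placeholder key material (not real keys):

    names = "compute-0.ctlplane.example.com,192.168.122.100,compute-0*"
    keys = {
        "ssh-ed25519": "AAAA...placeholder...",
        "ecdsa-sha2-nistp256": "AAAA...placeholder...",
    }
    lines = [f"{names} {ktype} {blob}" for ktype, blob in keys.items()]
    with open("/etc/ssh/ssh_known_hosts", "w") as fh:
        fh.write("\n".join(lines) + "\n")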
Nov 26 12:35:47 compute-0 sshd-session[68295]: Connection closed by 192.168.122.30 port 47102
Nov 26 12:35:47 compute-0 sshd-session[68292]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:35:47 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Nov 26 12:35:47 compute-0 systemd[1]: session-14.scope: Consumed 2.351s CPU time.
Nov 26 12:35:47 compute-0 systemd-logind[777]: Session 14 logged out. Waiting for processes to exit.
Nov 26 12:35:47 compute-0 systemd-logind[777]: Removed session 14.
Nov 26 12:35:52 compute-0 sshd-session[69234]: Accepted publickey for zuul from 192.168.122.30 port 55574 ssh2: ECDSA SHA256:oYqKaXpw3UXfGsjV9kVmxjhHDxrL0kntNo/c2mjveus
Nov 26 12:35:52 compute-0 systemd-logind[777]: New session 15 of user zuul.
Nov 26 12:35:52 compute-0 systemd[1]: Started Session 15 of User zuul.
Nov 26 12:35:52 compute-0 sshd-session[69234]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:35:53 compute-0 python3.9[69387]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:35:54 compute-0 sudo[69541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjkqucfdpfnzopcgyjmcmipbdoorbeln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160553.6556156-32-266628818393521/AnsiballZ_systemd.py'
Nov 26 12:35:54 compute-0 sudo[69541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:54 compute-0 python3.9[69543]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 26 12:35:54 compute-0 sudo[69541]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:54 compute-0 sudo[69695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgplizjobbltlydekkwwnkldxqmkdmtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160554.4541345-40-210280158545818/AnsiballZ_systemd.py'
Nov 26 12:35:54 compute-0 sudo[69695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:54 compute-0 python3.9[69697]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 12:35:54 compute-0 sudo[69695]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:55 compute-0 sudo[69848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdmaimwxaxgdmgirvtrurnjbyfrodbyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160555.0369782-49-124596113272906/AnsiballZ_command.py'
Nov 26 12:35:55 compute-0 sudo[69848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:55 compute-0 python3.9[69850]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:35:55 compute-0 sudo[69848]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:55 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 26 12:35:55 compute-0 sudo[70003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcovfkkywgolkpzklzhujhxuqyofxadc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160555.5940833-57-255646299830121/AnsiballZ_stat.py'
Nov 26 12:35:55 compute-0 sudo[70003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:56 compute-0 python3.9[70005]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:35:56 compute-0 sudo[70003]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:56 compute-0 sudo[70157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjayqycvjqalulkyyemqpmogvwegpbrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160556.1502035-65-204152715165569/AnsiballZ_command.py'
Nov 26 12:35:56 compute-0 sudo[70157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:56 compute-0 python3.9[70159]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:35:56 compute-0 sudo[70157]: pam_unix(sudo:session): session closed for user root
Nov 26 12:35:56 compute-0 sudo[70312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsmxdksvpmqsgjaamxhumqbsyvnxnfye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160556.6070206-73-180511692039579/AnsiballZ_file.py'
Nov 26 12:35:56 compute-0 sudo[70312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:35:57 compute-0 python3.9[70314]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:35:57 compute-0 sudo[70312]: pam_unix(sudo:session): session closed for user root
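The sequence above shows the role's change-sentinel pattern: /etc/nftables/edpm-rules.nft.changed was touched when edpm-rules.nft was rewritten (12:35:34), its presence is checked here, the flush/rules/update-jumps fragments are applied with `nft -f -`, and the marker is removed so an unchanged ruleset is not re-applied on the next run. A compact sketch of that logic, assuming the same paths (illustrative only):

    import os
    import subprocess

    SENTINEL = "/etc/nftables/edpm-rules.nft.changed"
    APPLY_ORDER = [
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
    ]

    if os.path.exists(SENTINEL):
        combined = "".join(open(p).read() for p in APPLY_ORDER)
        subprocess.run(["nft", "-f", "-"], input=combined, text=True, check=True)
        os.remove(SENTINEL)   # next run is a no-op until the rules change again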
Nov 26 12:35:57 compute-0 sshd-session[69237]: Connection closed by 192.168.122.30 port 55574
Nov 26 12:35:57 compute-0 sshd-session[69234]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:35:57 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Nov 26 12:35:57 compute-0 systemd[1]: session-15.scope: Consumed 3.137s CPU time.
Nov 26 12:35:57 compute-0 systemd-logind[777]: Session 15 logged out. Waiting for processes to exit.
Nov 26 12:35:57 compute-0 systemd-logind[777]: Removed session 15.
Nov 26 12:36:01 compute-0 sshd-session[70339]: Accepted publickey for zuul from 192.168.122.30 port 55032 ssh2: ECDSA SHA256:oYqKaXpw3UXfGsjV9kVmxjhHDxrL0kntNo/c2mjveus
Nov 26 12:36:01 compute-0 systemd-logind[777]: New session 16 of user zuul.
Nov 26 12:36:01 compute-0 systemd[1]: Started Session 16 of User zuul.
Nov 26 12:36:01 compute-0 sshd-session[70339]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:36:02 compute-0 python3.9[70492]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:36:03 compute-0 sudo[70646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjvwahwracnxllchaobmbmeeaawczvfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160562.8911412-34-241211754244437/AnsiballZ_setup.py'
Nov 26 12:36:03 compute-0 sudo[70646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:03 compute-0 python3.9[70648]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 12:36:03 compute-0 sudo[70646]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:03 compute-0 sudo[70730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvyavcfsyjkhdtbakyadgtcfydfsozeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160562.8911412-34-241211754244437/AnsiballZ_dnf.py'
Nov 26 12:36:03 compute-0 sudo[70730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:03 compute-0 python3.9[70732]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 26 12:36:04 compute-0 sudo[70730]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:05 compute-0 python3.9[70883]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:36:06 compute-0 python3.9[71034]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
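The two probes above decide whether the node needs a reboot: `needs-restarting -r` (from yum-utils, installed just before) exits non-zero when core packages such as the kernel or glibc were updated since boot, and the find looks for flag files under /var/lib/openstack/reboot_required/. A small sketch combining both signals (illustrative only):

    import glob
    import subprocess

    rc = subprocess.run(["needs-restarting", "-r"]).returncode
    flag_files = glob.glob("/var/lib/openstack/reboot_required/*")
    reboot_needed = (rc != 0) or bool(flag_files)
    print("reboot required:", reboot_needed)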
Nov 26 12:36:06 compute-0 python3.9[71184]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:36:07 compute-0 python3.9[71334]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:36:07 compute-0 sshd-session[70342]: Connection closed by 192.168.122.30 port 55032
Nov 26 12:36:07 compute-0 sshd-session[70339]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:36:07 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Nov 26 12:36:07 compute-0 systemd[1]: session-16.scope: Consumed 4.185s CPU time.
Nov 26 12:36:07 compute-0 systemd-logind[777]: Session 16 logged out. Waiting for processes to exit.
Nov 26 12:36:07 compute-0 systemd-logind[777]: Removed session 16.
Nov 26 12:36:14 compute-0 sshd-session[71359]: Accepted publickey for zuul from 192.168.26.112 port 43822 ssh2: RSA SHA256:uSHoHww2H0x1DJ3EZPnNe4LJTY0mkFHKbJRE/2eWBow
Nov 26 12:36:14 compute-0 systemd-logind[777]: New session 17 of user zuul.
Nov 26 12:36:14 compute-0 systemd[1]: Started Session 17 of User zuul.
Nov 26 12:36:14 compute-0 sshd-session[71359]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:36:14 compute-0 sudo[71435]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vexuylikccdlorjvwplygzemozlhiztw ; /usr/bin/python3'
Nov 26 12:36:14 compute-0 sudo[71435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:14 compute-0 useradd[71439]: new group: name=ceph-admin, GID=42478
Nov 26 12:36:14 compute-0 useradd[71439]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Nov 26 12:36:14 compute-0 sudo[71435]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:14 compute-0 sudo[71521]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtfkifmamxdkfrpvmmrkdqjfwdmywrji ; /usr/bin/python3'
Nov 26 12:36:14 compute-0 sudo[71521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:15 compute-0 sudo[71521]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:15 compute-0 sudo[71594]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wssrvxmqfznjiiflxlsjesffhupolzof ; /usr/bin/python3'
Nov 26 12:36:15 compute-0 sudo[71594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:15 compute-0 sudo[71594]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:15 compute-0 sudo[71644]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjboioecteyokjyobcdnxlczbzxejzkv ; /usr/bin/python3'
Nov 26 12:36:15 compute-0 sudo[71644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:15 compute-0 sudo[71644]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:15 compute-0 sudo[71670]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtrhxqilkkxxemmpdfjeduqgrrgzhjlm ; /usr/bin/python3'
Nov 26 12:36:15 compute-0 sudo[71670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:15 compute-0 sudo[71670]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:16 compute-0 sudo[71696]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcijqfxexjtxvsaruoonyyzzygithcvu ; /usr/bin/python3'
Nov 26 12:36:16 compute-0 sudo[71696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:16 compute-0 sudo[71696]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:16 compute-0 sudo[71722]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huyukajdustvrdpkulldlfgeglvftrpj ; /usr/bin/python3'
Nov 26 12:36:16 compute-0 sudo[71722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:16 compute-0 sudo[71722]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:16 compute-0 sudo[71800]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oebpitdoiukezbmcnvkoqpenbarnodbj ; /usr/bin/python3'
Nov 26 12:36:16 compute-0 sudo[71800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:16 compute-0 sudo[71800]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:16 compute-0 sudo[71873]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xazjpjjypgakgupacjcrnbpdrvdqahck ; /usr/bin/python3'
Nov 26 12:36:16 compute-0 sudo[71873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:17 compute-0 sudo[71873]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:17 compute-0 sudo[71975]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grgdasicplkklnhzmxbjutatlmpbtsis ; /usr/bin/python3'
Nov 26 12:36:17 compute-0 sudo[71975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:17 compute-0 sudo[71975]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:17 compute-0 sudo[72048]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzcfbzkstalriyusqxqwttckaqiufnez ; /usr/bin/python3'
Nov 26 12:36:17 compute-0 sudo[72048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:17 compute-0 sudo[72048]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:17 compute-0 sudo[72098]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syjlhbtztekgwosmzlkhraiyvmrlxstm ; /usr/bin/python3'
Nov 26 12:36:17 compute-0 sudo[72098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:18 compute-0 python3[72100]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:36:18 compute-0 sudo[72098]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:19 compute-0 sudo[72189]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqqjrunjvvwynxjoslyikmlejxxnteer ; /usr/bin/python3'
Nov 26 12:36:19 compute-0 sudo[72189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:19 compute-0 python3[72191]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 26 12:36:20 compute-0 sudo[72189]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:20 compute-0 sudo[72216]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdxuyqvvvzrffqfffdpakqguyciojtdp ; /usr/bin/python3'
Nov 26 12:36:20 compute-0 sudo[72216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:20 compute-0 python3[72218]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 12:36:20 compute-0 sudo[72216]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:20 compute-0 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 12:36:20 compute-0 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 12:36:20 compute-0 sudo[72243]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uireqdtdhssznxaezrzonvjbzuosuurh ; /usr/bin/python3'
Nov 26 12:36:20 compute-0 sudo[72243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:20 compute-0 python3[72245]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
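The multi-line command above prepares the backing store for the first OSD: dd with bs=1 count=0 seek=20G extends /var/lib/ceph-osd-0.img to 20 GiB without writing any data, losetup attaches it to /dev/loop3, and lsblk confirms the new block device. The same sequence as a standalone shell sketch, using only names already present in the log:
# Create a 20G sparse backing file (no blocks are actually written).
dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
# Attach it to a fixed loop device for the later LVM and Ceph OSD steps.
losetup /dev/loop3 /var/lib/ceph-osd-0.img
# Confirm the device exists and has the expected size.
lsblk /dev/loop3
The kernel lines that follow confirm the attach: a capacity change to 41943040 sectors is 41943040 * 512 bytes, exactly 20 GiB.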
Nov 26 12:36:20 compute-0 kernel: loop: module loaded
Nov 26 12:36:20 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Nov 26 12:36:20 compute-0 sudo[72243]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:20 compute-0 sudo[72277]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlrjfsbfrezjwnkzurwzokylwlaankzh ; /usr/bin/python3'
Nov 26 12:36:20 compute-0 sudo[72277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:21 compute-0 python3[72279]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:36:21 compute-0 lvm[72282]: PV /dev/loop3 not used.
Nov 26 12:36:21 compute-0 lvm[72291]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 26 12:36:21 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 26 12:36:21 compute-0 lvm[72293]:   1 logical volume(s) in volume group "ceph_vg0" now active
Nov 26 12:36:21 compute-0 sudo[72277]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:21 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 26 12:36:21 compute-0 sudo[72369]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixtnxohcfiydkfeqotdwsxdqseongzag ; /usr/bin/python3'
Nov 26 12:36:21 compute-0 sudo[72369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:21 compute-0 python3[72371]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 12:36:21 compute-0 sudo[72369]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:21 compute-0 sudo[72442]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbcjezjsifswgsxbwdxkgruroijmekoo ; /usr/bin/python3'
Nov 26 12:36:21 compute-0 sudo[72442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:21 compute-0 python3[72444]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764160581.3948598-36670-42174178206932/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:36:21 compute-0 sudo[72442]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:22 compute-0 sudo[72492]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anlmeavbxpqlqqmehxfeelozmexhzahn ; /usr/bin/python3'
Nov 26 12:36:22 compute-0 sudo[72492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:22 compute-0 python3[72494]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:36:22 compute-0 systemd[1]: Reloading.
Nov 26 12:36:22 compute-0 systemd-rc-local-generator[72517]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:36:22 compute-0 systemd-sysv-generator[72520]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:36:22 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 26 12:36:22 compute-0 bash[72533]: /dev/loop3: [64513]:4194933 (/var/lib/ceph-osd-0.img)
Nov 26 12:36:22 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 26 12:36:22 compute-0 lvm[72534]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 26 12:36:22 compute-0 lvm[72534]: VG ceph_vg0 finished
Nov 26 12:36:22 compute-0 sudo[72492]: pam_unix(sudo:session): session closed for user root
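The task sequence above templates /etc/systemd/system/ceph-osd-losetup-0.service from ceph-osd-losetup.service.j2, reloads systemd, and enables and starts the unit; the bash output '/dev/loop3: [64513]:4194933 (/var/lib/ceph-osd-0.img)' is exactly what 'losetup /dev/loop3' prints for an attached device, so the unit evidently (re)attaches the backing file at start. The rendered unit body is not in the log; the following is a plausible oneshot sketch under that assumption, written as the shell steps an operator would run by hand (Description, After= and the ExecStart wording are all guesses):
# Assumed unit body: only the template name and the losetup output are logged.
cat > /etc/systemd/system/ceph-osd-losetup-0.service <<'EOF'
[Unit]
Description=Ceph OSD losetup
After=local-fs.target

[Service]
Type=oneshot
RemainAfterExit=yes
# Attach the loop device if it is not already attached.
ExecStart=/bin/bash -c 'losetup /dev/loop3 || losetup /dev/loop3 /var/lib/ceph-osd-0.img'

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now ceph-osd-losetup-0.service
The same pattern repeats below for /dev/loop4 and /dev/loop5 (ceph-osd-losetup-1.service and ceph-osd-losetup-2.service).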
Nov 26 12:36:22 compute-0 sudo[72558]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zppmnjgzubttwuzxdkdolyypngldxytq ; /usr/bin/python3'
Nov 26 12:36:22 compute-0 sudo[72558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:22 compute-0 python3[72560]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 26 12:36:23 compute-0 sudo[72558]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:23 compute-0 sudo[72585]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iptxcqykkenkitipnmkkgzhwbivmsjhi ; /usr/bin/python3'
Nov 26 12:36:23 compute-0 sudo[72585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:24 compute-0 python3[72587]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 12:36:24 compute-0 sudo[72585]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:24 compute-0 sudo[72611]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxluzdnvqwvrtnkfvqnskwgchxyvqtkv ; /usr/bin/python3'
Nov 26 12:36:24 compute-0 sudo[72611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:24 compute-0 python3[72613]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:36:24 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Nov 26 12:36:24 compute-0 sudo[72611]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:24 compute-0 sudo[72643]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnerzsqghqlcxlfjrfoeeagtwgflwnea ; /usr/bin/python3'
Nov 26 12:36:24 compute-0 sudo[72643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:24 compute-0 python3[72645]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:36:24 compute-0 lvm[72648]: PV /dev/loop4 not used.
Nov 26 12:36:24 compute-0 lvm[72658]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 26 12:36:24 compute-0 sudo[72643]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:24 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Nov 26 12:36:24 compute-0 lvm[72660]:   1 logical volume(s) in volume group "ceph_vg1" now active
Nov 26 12:36:24 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Nov 26 12:36:24 compute-0 sudo[72736]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czrvdoqztbqhvyshufkdsdslbqjvrezm ; /usr/bin/python3'
Nov 26 12:36:24 compute-0 sudo[72736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:25 compute-0 python3[72738]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 12:36:25 compute-0 sudo[72736]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:25 compute-0 sudo[72809]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwsdgpdueanmtgcxpuuvsdgwstfhulws ; /usr/bin/python3'
Nov 26 12:36:25 compute-0 sudo[72809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:25 compute-0 python3[72811]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764160584.8523064-36697-53067327336241/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:36:25 compute-0 sudo[72809]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:25 compute-0 sudo[72859]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhwolqezxgbfwbgsimbnrnrybesukinq ; /usr/bin/python3'
Nov 26 12:36:25 compute-0 sudo[72859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:25 compute-0 python3[72861]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:36:25 compute-0 systemd[1]: Reloading.
Nov 26 12:36:25 compute-0 systemd-sysv-generator[72887]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:36:25 compute-0 systemd-rc-local-generator[72884]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:36:25 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 26 12:36:25 compute-0 bash[72901]: /dev/loop4: [64513]:4194935 (/var/lib/ceph-osd-1.img)
Nov 26 12:36:25 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 26 12:36:25 compute-0 lvm[72902]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 26 12:36:25 compute-0 lvm[72902]: VG ceph_vg1 finished
Nov 26 12:36:25 compute-0 sudo[72859]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:26 compute-0 sudo[72926]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sownbskvmztcqhmejlovcilolgaebyzc ; /usr/bin/python3'
Nov 26 12:36:26 compute-0 sudo[72926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:26 compute-0 python3[72928]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 26 12:36:27 compute-0 sudo[72926]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:27 compute-0 sudo[72953]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twahwcxqkokpllgroxffciyonfiadgnp ; /usr/bin/python3'
Nov 26 12:36:27 compute-0 sudo[72953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:27 compute-0 python3[72955]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 12:36:27 compute-0 sudo[72953]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:27 compute-0 sudo[72979]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twqrdiudtpzxlzzxtgtabgiproceypvx ; /usr/bin/python3'
Nov 26 12:36:27 compute-0 sudo[72979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:27 compute-0 python3[72981]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:36:27 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Nov 26 12:36:27 compute-0 sudo[72979]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:27 compute-0 sudo[73011]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfiwpkqnbatiuctqylmrdrpfwfkwbfac ; /usr/bin/python3'
Nov 26 12:36:27 compute-0 sudo[73011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:27 compute-0 python3[73013]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:36:27 compute-0 lvm[73016]: PV /dev/loop5 not used.
Nov 26 12:36:27 compute-0 lvm[73026]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 26 12:36:27 compute-0 sudo[73011]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:27 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Nov 26 12:36:28 compute-0 lvm[73028]:   1 logical volume(s) in volume group "ceph_vg2" now active
Nov 26 12:36:28 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Nov 26 12:36:28 compute-0 sudo[73104]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-appervdnodwaidhhzzmmxjfoqmtewrcj ; /usr/bin/python3'
Nov 26 12:36:28 compute-0 sudo[73104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:28 compute-0 python3[73106]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 12:36:28 compute-0 sudo[73104]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:28 compute-0 sudo[73177]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkqsuznqldqtmryuxnincvdgfpwterse ; /usr/bin/python3'
Nov 26 12:36:28 compute-0 sudo[73177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:28 compute-0 python3[73179]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764160588.080042-36724-11163864992032/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:36:28 compute-0 sudo[73177]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:28 compute-0 sudo[73227]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mimvedzblcguehgitxxgyhxvgbcopqxg ; /usr/bin/python3'
Nov 26 12:36:28 compute-0 sudo[73227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:28 compute-0 python3[73229]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:36:28 compute-0 systemd[1]: Reloading.
Nov 26 12:36:28 compute-0 systemd-rc-local-generator[73252]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:36:28 compute-0 systemd-sysv-generator[73255]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:36:29 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 26 12:36:29 compute-0 bash[73268]: /dev/loop5: [64513]:4194939 (/var/lib/ceph-osd-2.img)
Nov 26 12:36:29 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 26 12:36:29 compute-0 lvm[73269]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 26 12:36:29 compute-0 lvm[73269]: VG ceph_vg2 finished
Nov 26 12:36:29 compute-0 sudo[73227]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:30 compute-0 python3[73293]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:36:32 compute-0 sudo[73384]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjrqcwhbtsoazmsainpgukaqormgipwb ; /usr/bin/python3'
Nov 26 12:36:32 compute-0 sudo[73384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:32 compute-0 python3[73386]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 26 12:36:33 compute-0 groupadd[73392]: group added to /etc/group: name=cephadm, GID=992
Nov 26 12:36:33 compute-0 groupadd[73392]: group added to /etc/gshadow: name=cephadm
Nov 26 12:36:33 compute-0 groupadd[73392]: new group: name=cephadm, GID=992
Nov 26 12:36:33 compute-0 useradd[73399]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Nov 26 12:36:33 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 12:36:33 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 26 12:36:33 compute-0 sudo[73384]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:33 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 12:36:33 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 26 12:36:33 compute-0 systemd[1]: run-rd4d1e30ccdbe457db6dbf1d17ce5c515.service: Deactivated successfully.
Nov 26 12:36:33 compute-0 sudo[73495]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhtvqammeibfgyxklijpeyqtuznaxwio ; /usr/bin/python3'
Nov 26 12:36:33 compute-0 sudo[73495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:33 compute-0 python3[73497]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 12:36:34 compute-0 sudo[73495]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:34 compute-0 sudo[73523]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-attdyzpnbsiurofxlzgqquvvvrdvgnrc ; /usr/bin/python3'
Nov 26 12:36:34 compute-0 sudo[73523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:34 compute-0 python3[73525]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:36:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 12:36:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 12:36:34 compute-0 sudo[73523]: pam_unix(sudo:session): session closed for user root
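'/usr/sbin/cephadm ls --no-detail' above inventories any Ceph daemons cephadm already manages on this host and prints them as a JSON array. With the jq installed earlier, a quick emptiness check might look like:
# Number of cephadm-managed daemons on this host; 0 means a clean host.
/usr/sbin/cephadm ls --no-detail | jq 'length'
# Or list their names, if any:
/usr/sbin/cephadm ls --no-detail | jq -r '.[].name'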
Nov 26 12:36:34 compute-0 sudo[73581]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-butrmxfeyqufmltqpbojungxlccnhtyu ; /usr/bin/python3'
Nov 26 12:36:34 compute-0 sudo[73581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:34 compute-0 python3[73583]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:36:34 compute-0 sudo[73581]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:34 compute-0 sudo[73607]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wecldnvorfsudvmkggeyromwvbkekhmf ; /usr/bin/python3'
Nov 26 12:36:34 compute-0 sudo[73607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:35 compute-0 python3[73609]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:36:35 compute-0 sudo[73607]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:35 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 12:36:35 compute-0 sudo[73685]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsstafzmtjgilspdergdvgqyrsaqobrv ; /usr/bin/python3'
Nov 26 12:36:35 compute-0 sudo[73685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:35 compute-0 python3[73687]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 12:36:35 compute-0 sudo[73685]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:35 compute-0 sudo[73758]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhdeyjzzlkuamvvypughgayrxdhzqtga ; /usr/bin/python3'
Nov 26 12:36:35 compute-0 sudo[73758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:35 compute-0 python3[73760]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764160595.3499851-36871-105810088639733/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:36:35 compute-0 sudo[73758]: pam_unix(sudo:session): session closed for user root
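The copy above installs /home/ceph-admin/specs/ceph_spec.yaml, the service specification that cephadm will later apply. Its contents are not logged; purely as an illustration, a minimal single-host spec consistent with the host, mon IP and LVs seen elsewhere in this log (compute-0, 192.168.122.100, ceph_vg0/ceph_lv0 through ceph_vg2/ceph_lv2) could look like the following, with every value an assumption:
# Assumed example only: the real ceph_spec.yaml content is not in the log.
cat > /home/ceph-admin/specs/ceph_spec.yaml <<'EOF'
service_type: host
hostname: compute-0
addr: 192.168.122.100
---
service_type: osd
service_id: default_drive_group
placement:
  hosts:
    - compute-0
spec:
  data_devices:
    paths:
      - /dev/ceph_vg0/ceph_lv0
      - /dev/ceph_vg1/ceph_lv1
      - /dev/ceph_vg2/ceph_lv2
EOF
chown ceph-admin:ceph-admin /home/ceph-admin/specs/ceph_spec.yaml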
Nov 26 12:36:36 compute-0 sudo[73860]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whvbgmgrneyvcpngknkikbnyrwvsunay ; /usr/bin/python3'
Nov 26 12:36:36 compute-0 sudo[73860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:36 compute-0 python3[73862]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 12:36:36 compute-0 sudo[73860]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:36 compute-0 sudo[73933]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyqydyyxhrwozzrhnegessncoxobzrem ; /usr/bin/python3'
Nov 26 12:36:36 compute-0 sudo[73933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:36 compute-0 python3[73935]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764160596.118687-36889-259583515668316/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:36:36 compute-0 sudo[73933]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:36 compute-0 sudo[73983]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uphfysfvtkahcczduddhcawnsicvvjzv ; /usr/bin/python3'
Nov 26 12:36:36 compute-0 sudo[73983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:36 compute-0 python3[73985]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 12:36:36 compute-0 sudo[73983]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:36 compute-0 sudo[74011]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajzixvmlxgjlaawgpqatgvjabxlpnhug ; /usr/bin/python3'
Nov 26 12:36:36 compute-0 sudo[74011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:37 compute-0 python3[74013]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 12:36:37 compute-0 sudo[74011]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:37 compute-0 sudo[74039]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohutdqldmeymhyxltbadlcmiyqcbqbaw ; /usr/bin/python3'
Nov 26 12:36:37 compute-0 sudo[74039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:37 compute-0 python3[74041]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 12:36:37 compute-0 sudo[74039]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:37 compute-0 sudo[74067]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpdmajqeimqusikafjtjaogdxbpxxady ; /usr/bin/python3'
Nov 26 12:36:37 compute-0 sudo[74067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:36:37 compute-0 python3[74069]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
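The cephadm bootstrap invocation above brings up a single-host cluster with fsid f7d7fe93-41e5-51c4-b72d-63b38686102e on mon IP 192.168.122.100, reusing the pre-generated ceph-admin SSH keypair and writing the admin keyring and ceph.conf under /etc/ceph; the monitoring stack and dashboard are skipped. After it finishes, a standard sanity check (not recorded in this log) would be to open a cephadm shell and ask for cluster status:
# Post-bootstrap verification sketch; paths and fsid are taken from the command above.
cephadm shell --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e \
  -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
  -- ceph -s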
Nov 26 12:36:37 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 12:36:37 compute-0 sshd-session[74082]: Accepted publickey for ceph-admin from 192.168.122.100 port 34074 ssh2: RSA SHA256:u+oi91Se3Z6qNLfJgM2if+islPXdtJdild13071S1x0
Nov 26 12:36:37 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 26 12:36:37 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 26 12:36:37 compute-0 systemd-logind[777]: New session 18 of user ceph-admin.
Nov 26 12:36:37 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 26 12:36:37 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 26 12:36:37 compute-0 systemd[74086]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 12:36:37 compute-0 systemd[74086]: Queued start job for default target Main User Target.
Nov 26 12:36:37 compute-0 systemd[74086]: Created slice User Application Slice.
Nov 26 12:36:37 compute-0 systemd[74086]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 26 12:36:37 compute-0 systemd[74086]: Started Daily Cleanup of User's Temporary Directories.
Nov 26 12:36:37 compute-0 systemd[74086]: Reached target Paths.
Nov 26 12:36:37 compute-0 systemd[74086]: Reached target Timers.
Nov 26 12:36:37 compute-0 systemd[74086]: Starting D-Bus User Message Bus Socket...
Nov 26 12:36:37 compute-0 systemd[74086]: Starting Create User's Volatile Files and Directories...
Nov 26 12:36:37 compute-0 systemd[74086]: Listening on D-Bus User Message Bus Socket.
Nov 26 12:36:37 compute-0 systemd[74086]: Reached target Sockets.
Nov 26 12:36:37 compute-0 systemd[74086]: Finished Create User's Volatile Files and Directories.
Nov 26 12:36:37 compute-0 systemd[74086]: Reached target Basic System.
Nov 26 12:36:37 compute-0 systemd[74086]: Reached target Main User Target.
Nov 26 12:36:37 compute-0 systemd[74086]: Startup finished in 89ms.
Nov 26 12:36:37 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 26 12:36:37 compute-0 systemd[1]: Started Session 18 of User ceph-admin.
Nov 26 12:36:37 compute-0 sshd-session[74082]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 12:36:37 compute-0 sudo[74103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Nov 26 12:36:37 compute-0 sudo[74103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:36:37 compute-0 sudo[74103]: pam_unix(sudo:session): session closed for user root
Nov 26 12:36:37 compute-0 sshd-session[74102]: Received disconnect from 192.168.122.100 port 34074:11: disconnected by user
Nov 26 12:36:37 compute-0 sshd-session[74102]: Disconnected from user ceph-admin 192.168.122.100 port 34074
Nov 26 12:36:37 compute-0 sshd-session[74082]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 26 12:36:37 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Nov 26 12:36:37 compute-0 systemd-logind[777]: Session 18 logged out. Waiting for processes to exit.
Nov 26 12:36:37 compute-0 systemd-logind[777]: Removed session 18.
Nov 26 12:36:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat587868829-lower\x2dmapped.mount: Deactivated successfully.
Nov 26 12:36:48 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Nov 26 12:36:48 compute-0 systemd[74086]: Activating special unit Exit the Session...
Nov 26 12:36:48 compute-0 systemd[74086]: Stopped target Main User Target.
Nov 26 12:36:48 compute-0 systemd[74086]: Stopped target Basic System.
Nov 26 12:36:48 compute-0 systemd[74086]: Stopped target Paths.
Nov 26 12:36:48 compute-0 systemd[74086]: Stopped target Sockets.
Nov 26 12:36:48 compute-0 systemd[74086]: Stopped target Timers.
Nov 26 12:36:48 compute-0 systemd[74086]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 26 12:36:48 compute-0 systemd[74086]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 26 12:36:48 compute-0 systemd[74086]: Closed D-Bus User Message Bus Socket.
Nov 26 12:36:48 compute-0 systemd[74086]: Stopped Create User's Volatile Files and Directories.
Nov 26 12:36:48 compute-0 systemd[74086]: Removed slice User Application Slice.
Nov 26 12:36:48 compute-0 systemd[74086]: Reached target Shutdown.
Nov 26 12:36:48 compute-0 systemd[74086]: Finished Exit the Session.
Nov 26 12:36:48 compute-0 systemd[74086]: Reached target Exit the Session.
Nov 26 12:36:48 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Nov 26 12:36:48 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Nov 26 12:36:48 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 26 12:36:48 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 26 12:36:48 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 26 12:36:48 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 26 12:36:48 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Nov 26 12:36:51 compute-0 podman[74140]: 2025-11-26 12:36:51.529388351 +0000 UTC m=+13.534072195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:36:51 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 12:36:51 compute-0 podman[74189]: 2025-11-26 12:36:51.576709682 +0000 UTC m=+0.029980394 container create 5588238d63ba7bcae7a1c8e5d8cf7c6ab9de211978922fc59935c5c6672017d9 (image=quay.io/ceph/ceph:v18, name=modest_lewin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 12:36:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck3259280976-merged.mount: Deactivated successfully.
Nov 26 12:36:51 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 26 12:36:51 compute-0 systemd[1]: Started libpod-conmon-5588238d63ba7bcae7a1c8e5d8cf7c6ab9de211978922fc59935c5c6672017d9.scope.
Nov 26 12:36:51 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:36:51 compute-0 podman[74189]: 2025-11-26 12:36:51.639059281 +0000 UTC m=+0.092330013 container init 5588238d63ba7bcae7a1c8e5d8cf7c6ab9de211978922fc59935c5c6672017d9 (image=quay.io/ceph/ceph:v18, name=modest_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 12:36:51 compute-0 podman[74189]: 2025-11-26 12:36:51.644408391 +0000 UTC m=+0.097679103 container start 5588238d63ba7bcae7a1c8e5d8cf7c6ab9de211978922fc59935c5c6672017d9 (image=quay.io/ceph/ceph:v18, name=modest_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 12:36:51 compute-0 podman[74189]: 2025-11-26 12:36:51.645345285 +0000 UTC m=+0.098615998 container attach 5588238d63ba7bcae7a1c8e5d8cf7c6ab9de211978922fc59935c5c6672017d9 (image=quay.io/ceph/ceph:v18, name=modest_lewin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 12:36:51 compute-0 podman[74189]: 2025-11-26 12:36:51.563778705 +0000 UTC m=+0.017049437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:36:51 compute-0 modest_lewin[74202]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 26 12:36:51 compute-0 systemd[1]: libpod-5588238d63ba7bcae7a1c8e5d8cf7c6ab9de211978922fc59935c5c6672017d9.scope: Deactivated successfully.
Nov 26 12:36:51 compute-0 podman[74189]: 2025-11-26 12:36:51.895637856 +0000 UTC m=+0.348908568 container died 5588238d63ba7bcae7a1c8e5d8cf7c6ab9de211978922fc59935c5c6672017d9 (image=quay.io/ceph/ceph:v18, name=modest_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:36:51 compute-0 podman[74189]: 2025-11-26 12:36:51.91909607 +0000 UTC m=+0.372366781 container remove 5588238d63ba7bcae7a1c8e5d8cf7c6ab9de211978922fc59935c5c6672017d9 (image=quay.io/ceph/ceph:v18, name=modest_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 12:36:51 compute-0 systemd[1]: libpod-conmon-5588238d63ba7bcae7a1c8e5d8cf7c6ab9de211978922fc59935c5c6672017d9.scope: Deactivated successfully.
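The short-lived modest_lewin container above is a version probe: the quay.io/ceph/ceph:v18 image is pulled, run once to print 'ceph version 18.2.7 (...) reef (stable)', and then removed. Reproducing that check by hand is a one-liner (entrypoint spelled out explicitly here, as an illustration):
# One-off version probe against the same image used by the bootstrap.
podman run --rm --entrypoint ceph quay.io/ceph/ceph:v18 --version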
Nov 26 12:36:51 compute-0 podman[74217]: 2025-11-26 12:36:51.960555795 +0000 UTC m=+0.026568025 container create 148a4c1411d39c92ce065d462a21c67eb71f14acf4ea67e415d3177bd6b91fda (image=quay.io/ceph/ceph:v18, name=practical_moser, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 12:36:51 compute-0 systemd[1]: Started libpod-conmon-148a4c1411d39c92ce065d462a21c67eb71f14acf4ea67e415d3177bd6b91fda.scope.
Nov 26 12:36:51 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:36:51 compute-0 podman[74217]: 2025-11-26 12:36:51.997156614 +0000 UTC m=+0.063168854 container init 148a4c1411d39c92ce065d462a21c67eb71f14acf4ea67e415d3177bd6b91fda (image=quay.io/ceph/ceph:v18, name=practical_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 26 12:36:52 compute-0 podman[74217]: 2025-11-26 12:36:52.001293699 +0000 UTC m=+0.067305919 container start 148a4c1411d39c92ce065d462a21c67eb71f14acf4ea67e415d3177bd6b91fda (image=quay.io/ceph/ceph:v18, name=practical_moser, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:36:52 compute-0 podman[74217]: 2025-11-26 12:36:52.002547202 +0000 UTC m=+0.068559432 container attach 148a4c1411d39c92ce065d462a21c67eb71f14acf4ea67e415d3177bd6b91fda (image=quay.io/ceph/ceph:v18, name=practical_moser, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:36:52 compute-0 practical_moser[74230]: 167 167
Nov 26 12:36:52 compute-0 systemd[1]: libpod-148a4c1411d39c92ce065d462a21c67eb71f14acf4ea67e415d3177bd6b91fda.scope: Deactivated successfully.
Nov 26 12:36:52 compute-0 podman[74217]: 2025-11-26 12:36:52.003914448 +0000 UTC m=+0.069926668 container died 148a4c1411d39c92ce065d462a21c67eb71f14acf4ea67e415d3177bd6b91fda (image=quay.io/ceph/ceph:v18, name=practical_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:36:52 compute-0 podman[74217]: 2025-11-26 12:36:52.018712082 +0000 UTC m=+0.084724302 container remove 148a4c1411d39c92ce065d462a21c67eb71f14acf4ea67e415d3177bd6b91fda (image=quay.io/ceph/ceph:v18, name=practical_moser, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:36:52 compute-0 podman[74217]: 2025-11-26 12:36:51.949772705 +0000 UTC m=+0.015784946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:36:52 compute-0 systemd[1]: libpod-conmon-148a4c1411d39c92ce065d462a21c67eb71f14acf4ea67e415d3177bd6b91fda.scope: Deactivated successfully.
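practical_moser prints '167 167', the uid and gid of the ceph user baked into the image, which the bootstrap needs so that host-side directories under /var/lib/ceph can be chowned to match. A plausible equivalent of that probe, assuming the path being stat'ed is /var/lib/ceph inside the image (the log does not show it):
# Assumed probe: report the owner uid and gid of /var/lib/ceph inside the image.
podman run --rm --entrypoint stat quay.io/ceph/ceph:v18 -c '%u %g' /var/lib/ceph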
Nov 26 12:36:52 compute-0 podman[74244]: 2025-11-26 12:36:52.060877376 +0000 UTC m=+0.028322612 container create 08214e560029fef339794ca7bc3622528f4e72efb8cf0483ad3bf39ee0e4f362 (image=quay.io/ceph/ceph:v18, name=eloquent_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 12:36:52 compute-0 systemd[1]: Started libpod-conmon-08214e560029fef339794ca7bc3622528f4e72efb8cf0483ad3bf39ee0e4f362.scope.
Nov 26 12:36:52 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:36:52 compute-0 podman[74244]: 2025-11-26 12:36:52.100975766 +0000 UTC m=+0.068421011 container init 08214e560029fef339794ca7bc3622528f4e72efb8cf0483ad3bf39ee0e4f362 (image=quay.io/ceph/ceph:v18, name=eloquent_dirac, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 12:36:52 compute-0 podman[74244]: 2025-11-26 12:36:52.105028402 +0000 UTC m=+0.072473638 container start 08214e560029fef339794ca7bc3622528f4e72efb8cf0483ad3bf39ee0e4f362 (image=quay.io/ceph/ceph:v18, name=eloquent_dirac, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:36:52 compute-0 podman[74244]: 2025-11-26 12:36:52.106129547 +0000 UTC m=+0.073574782 container attach 08214e560029fef339794ca7bc3622528f4e72efb8cf0483ad3bf39ee0e4f362 (image=quay.io/ceph/ceph:v18, name=eloquent_dirac, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:36:52 compute-0 eloquent_dirac[74257]: AQBk9CZp4cAuBxAAgg2KySB/jebvKIranhCLbw==
Nov 26 12:36:52 compute-0 systemd[1]: libpod-08214e560029fef339794ca7bc3622528f4e72efb8cf0483ad3bf39ee0e4f362.scope: Deactivated successfully.
Nov 26 12:36:52 compute-0 podman[74244]: 2025-11-26 12:36:52.122644888 +0000 UTC m=+0.090090123 container died 08214e560029fef339794ca7bc3622528f4e72efb8cf0483ad3bf39ee0e4f362 (image=quay.io/ceph/ceph:v18, name=eloquent_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 26 12:36:52 compute-0 podman[74244]: 2025-11-26 12:36:52.13734036 +0000 UTC m=+0.104785595 container remove 08214e560029fef339794ca7bc3622528f4e72efb8cf0483ad3bf39ee0e4f362 (image=quay.io/ceph/ceph:v18, name=eloquent_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 12:36:52 compute-0 podman[74244]: 2025-11-26 12:36:52.049391743 +0000 UTC m=+0.016836978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:36:52 compute-0 systemd[1]: libpod-conmon-08214e560029fef339794ca7bc3622528f4e72efb8cf0483ad3bf39ee0e4f362.scope: Deactivated successfully.
Nov 26 12:36:52 compute-0 podman[74271]: 2025-11-26 12:36:52.176592393 +0000 UTC m=+0.026125130 container create 8003a8f8c668d8726d8c56403fb1a53e07b0984d511e3bfa7b3e50b1d981d34f (image=quay.io/ceph/ceph:v18, name=sweet_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 12:36:52 compute-0 systemd[1]: Started libpod-conmon-8003a8f8c668d8726d8c56403fb1a53e07b0984d511e3bfa7b3e50b1d981d34f.scope.
Nov 26 12:36:52 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:36:52 compute-0 podman[74271]: 2025-11-26 12:36:52.214867518 +0000 UTC m=+0.064400274 container init 8003a8f8c668d8726d8c56403fb1a53e07b0984d511e3bfa7b3e50b1d981d34f (image=quay.io/ceph/ceph:v18, name=sweet_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:36:52 compute-0 podman[74271]: 2025-11-26 12:36:52.220431062 +0000 UTC m=+0.069963798 container start 8003a8f8c668d8726d8c56403fb1a53e07b0984d511e3bfa7b3e50b1d981d34f (image=quay.io/ceph/ceph:v18, name=sweet_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 26 12:36:52 compute-0 podman[74271]: 2025-11-26 12:36:52.221612408 +0000 UTC m=+0.071145144 container attach 8003a8f8c668d8726d8c56403fb1a53e07b0984d511e3bfa7b3e50b1d981d34f (image=quay.io/ceph/ceph:v18, name=sweet_ardinghelli, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:36:52 compute-0 sweet_ardinghelli[74287]: AQBk9CZpikwEDhAAdJSpQN1kvWuQUK/eKTCybg==
Nov 26 12:36:52 compute-0 systemd[1]: libpod-8003a8f8c668d8726d8c56403fb1a53e07b0984d511e3bfa7b3e50b1d981d34f.scope: Deactivated successfully.
Nov 26 12:36:52 compute-0 podman[74271]: 2025-11-26 12:36:52.237452246 +0000 UTC m=+0.086984982 container died 8003a8f8c668d8726d8c56403fb1a53e07b0984d511e3bfa7b3e50b1d981d34f (image=quay.io/ceph/ceph:v18, name=sweet_ardinghelli, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:36:52 compute-0 podman[74271]: 2025-11-26 12:36:52.252578599 +0000 UTC m=+0.102111336 container remove 8003a8f8c668d8726d8c56403fb1a53e07b0984d511e3bfa7b3e50b1d981d34f (image=quay.io/ceph/ceph:v18, name=sweet_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:36:52 compute-0 podman[74271]: 2025-11-26 12:36:52.166323023 +0000 UTC m=+0.015855779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:36:52 compute-0 systemd[1]: libpod-conmon-8003a8f8c668d8726d8c56403fb1a53e07b0984d511e3bfa7b3e50b1d981d34f.scope: Deactivated successfully.
Nov 26 12:36:52 compute-0 podman[74302]: 2025-11-26 12:36:52.295454963 +0000 UTC m=+0.028669404 container create 89efebb3a7f6fdbe364bd9fd93f1109f070c79d12bc11230c4de723706e8213b (image=quay.io/ceph/ceph:v18, name=sad_poitras, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:36:52 compute-0 systemd[1]: Started libpod-conmon-89efebb3a7f6fdbe364bd9fd93f1109f070c79d12bc11230c4de723706e8213b.scope.
Nov 26 12:36:52 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:36:52 compute-0 podman[74302]: 2025-11-26 12:36:52.329934705 +0000 UTC m=+0.063149167 container init 89efebb3a7f6fdbe364bd9fd93f1109f070c79d12bc11230c4de723706e8213b (image=quay.io/ceph/ceph:v18, name=sad_poitras, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:36:52 compute-0 podman[74302]: 2025-11-26 12:36:52.333894647 +0000 UTC m=+0.067109089 container start 89efebb3a7f6fdbe364bd9fd93f1109f070c79d12bc11230c4de723706e8213b (image=quay.io/ceph/ceph:v18, name=sad_poitras, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:36:52 compute-0 podman[74302]: 2025-11-26 12:36:52.334990453 +0000 UTC m=+0.068204904 container attach 89efebb3a7f6fdbe364bd9fd93f1109f070c79d12bc11230c4de723706e8213b (image=quay.io/ceph/ceph:v18, name=sad_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:36:52 compute-0 sad_poitras[74319]: AQBk9CZp7pXKFBAA12r1szxr0Tbew8rIVdADTA==
Nov 26 12:36:52 compute-0 systemd[1]: libpod-89efebb3a7f6fdbe364bd9fd93f1109f070c79d12bc11230c4de723706e8213b.scope: Deactivated successfully.
Nov 26 12:36:52 compute-0 podman[74302]: 2025-11-26 12:36:52.350980293 +0000 UTC m=+0.084194734 container died 89efebb3a7f6fdbe364bd9fd93f1109f070c79d12bc11230c4de723706e8213b (image=quay.io/ceph/ceph:v18, name=sad_poitras, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 12:36:52 compute-0 podman[74302]: 2025-11-26 12:36:52.36705365 +0000 UTC m=+0.100268092 container remove 89efebb3a7f6fdbe364bd9fd93f1109f070c79d12bc11230c4de723706e8213b (image=quay.io/ceph/ceph:v18, name=sad_poitras, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 12:36:52 compute-0 podman[74302]: 2025-11-26 12:36:52.284219682 +0000 UTC m=+0.017434133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:36:52 compute-0 systemd[1]: libpod-conmon-89efebb3a7f6fdbe364bd9fd93f1109f070c79d12bc11230c4de723706e8213b.scope: Deactivated successfully.
Nov 26 12:36:52 compute-0 podman[74336]: 2025-11-26 12:36:52.410091058 +0000 UTC m=+0.027888894 container create 1324feacc003d029f63156c2c8f2182f24b52cee524910dc10f39a5a182bac32 (image=quay.io/ceph/ceph:v18, name=wonderful_buck, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:36:52 compute-0 systemd[1]: Started libpod-conmon-1324feacc003d029f63156c2c8f2182f24b52cee524910dc10f39a5a182bac32.scope.
Nov 26 12:36:52 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:36:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a59a3711e846c79c6a8a16683a4732f85f4452140d4e5abca66938bded86a14b/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:52 compute-0 podman[74336]: 2025-11-26 12:36:52.452505924 +0000 UTC m=+0.070303769 container init 1324feacc003d029f63156c2c8f2182f24b52cee524910dc10f39a5a182bac32 (image=quay.io/ceph/ceph:v18, name=wonderful_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:36:52 compute-0 podman[74336]: 2025-11-26 12:36:52.456641015 +0000 UTC m=+0.074438851 container start 1324feacc003d029f63156c2c8f2182f24b52cee524910dc10f39a5a182bac32 (image=quay.io/ceph/ceph:v18, name=wonderful_buck, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Nov 26 12:36:52 compute-0 podman[74336]: 2025-11-26 12:36:52.457729687 +0000 UTC m=+0.075527522 container attach 1324feacc003d029f63156c2c8f2182f24b52cee524910dc10f39a5a182bac32 (image=quay.io/ceph/ceph:v18, name=wonderful_buck, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:36:52 compute-0 wonderful_buck[74349]: /usr/bin/monmaptool: monmap file /tmp/monmap
Nov 26 12:36:52 compute-0 wonderful_buck[74349]: setting min_mon_release = pacific
Nov 26 12:36:52 compute-0 wonderful_buck[74349]: /usr/bin/monmaptool: set fsid to f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 12:36:52 compute-0 wonderful_buck[74349]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Nov 26 12:36:52 compute-0 systemd[1]: libpod-1324feacc003d029f63156c2c8f2182f24b52cee524910dc10f39a5a182bac32.scope: Deactivated successfully.
Nov 26 12:36:52 compute-0 podman[74336]: 2025-11-26 12:36:52.479607893 +0000 UTC m=+0.097405729 container died 1324feacc003d029f63156c2c8f2182f24b52cee524910dc10f39a5a182bac32 (image=quay.io/ceph/ceph:v18, name=wonderful_buck, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 12:36:52 compute-0 podman[74336]: 2025-11-26 12:36:52.495566575 +0000 UTC m=+0.113364410 container remove 1324feacc003d029f63156c2c8f2182f24b52cee524910dc10f39a5a182bac32 (image=quay.io/ceph/ceph:v18, name=wonderful_buck, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:36:52 compute-0 podman[74336]: 2025-11-26 12:36:52.398925579 +0000 UTC m=+0.016723434 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:36:52 compute-0 systemd[1]: libpod-conmon-1324feacc003d029f63156c2c8f2182f24b52cee524910dc10f39a5a182bac32.scope: Deactivated successfully.
Nov 26 12:36:52 compute-0 podman[74365]: 2025-11-26 12:36:52.539463964 +0000 UTC m=+0.028021785 container create 8797bbb9ba7b0f2ca1e9d46f22773c883488e9ae10936cf4f59923bbbac05608 (image=quay.io/ceph/ceph:v18, name=interesting_nash, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 26 12:36:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-931fbe0a52ff430b9fff5ae8c2d892c45edc17a5e7017fb19ac58ef2482437cf-merged.mount: Deactivated successfully.
Nov 26 12:36:52 compute-0 systemd[1]: Started libpod-conmon-8797bbb9ba7b0f2ca1e9d46f22773c883488e9ae10936cf4f59923bbbac05608.scope.
Nov 26 12:36:52 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:36:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53642fc10fbb75cd2484c7431343462e8c7e6ad70d9f9eb433aaeda8e50b60b/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53642fc10fbb75cd2484c7431343462e8c7e6ad70d9f9eb433aaeda8e50b60b/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53642fc10fbb75cd2484c7431343462e8c7e6ad70d9f9eb433aaeda8e50b60b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53642fc10fbb75cd2484c7431343462e8c7e6ad70d9f9eb433aaeda8e50b60b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:52 compute-0 podman[74365]: 2025-11-26 12:36:52.5883166 +0000 UTC m=+0.076874431 container init 8797bbb9ba7b0f2ca1e9d46f22773c883488e9ae10936cf4f59923bbbac05608 (image=quay.io/ceph/ceph:v18, name=interesting_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:36:52 compute-0 podman[74365]: 2025-11-26 12:36:52.592112873 +0000 UTC m=+0.080670694 container start 8797bbb9ba7b0f2ca1e9d46f22773c883488e9ae10936cf4f59923bbbac05608 (image=quay.io/ceph/ceph:v18, name=interesting_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 12:36:52 compute-0 podman[74365]: 2025-11-26 12:36:52.593127705 +0000 UTC m=+0.081685536 container attach 8797bbb9ba7b0f2ca1e9d46f22773c883488e9ae10936cf4f59923bbbac05608 (image=quay.io/ceph/ceph:v18, name=interesting_nash, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 12:36:52 compute-0 podman[74365]: 2025-11-26 12:36:52.528103977 +0000 UTC m=+0.016661818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:36:52 compute-0 systemd[1]: libpod-8797bbb9ba7b0f2ca1e9d46f22773c883488e9ae10936cf4f59923bbbac05608.scope: Deactivated successfully.
Nov 26 12:36:52 compute-0 podman[74365]: 2025-11-26 12:36:52.632578444 +0000 UTC m=+0.121136265 container died 8797bbb9ba7b0f2ca1e9d46f22773c883488e9ae10936cf4f59923bbbac05608 (image=quay.io/ceph/ceph:v18, name=interesting_nash, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:36:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f53642fc10fbb75cd2484c7431343462e8c7e6ad70d9f9eb433aaeda8e50b60b-merged.mount: Deactivated successfully.
Nov 26 12:36:52 compute-0 podman[74365]: 2025-11-26 12:36:52.6480194 +0000 UTC m=+0.136577221 container remove 8797bbb9ba7b0f2ca1e9d46f22773c883488e9ae10936cf4f59923bbbac05608 (image=quay.io/ceph/ceph:v18, name=interesting_nash, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 12:36:52 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 12:36:52 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 12:36:52 compute-0 systemd[1]: libpod-conmon-8797bbb9ba7b0f2ca1e9d46f22773c883488e9ae10936cf4f59923bbbac05608.scope: Deactivated successfully.
Nov 26 12:36:52 compute-0 systemd[1]: Reloading.
Nov 26 12:36:52 compute-0 systemd-rc-local-generator[74438]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:36:52 compute-0 systemd-sysv-generator[74441]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:36:52 compute-0 systemd[1]: Reloading.
Nov 26 12:36:52 compute-0 systemd-rc-local-generator[74473]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:36:52 compute-0 systemd-sysv-generator[74476]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:36:53 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Nov 26 12:36:53 compute-0 systemd[1]: Reloading.
Nov 26 12:36:53 compute-0 systemd-rc-local-generator[74511]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:36:53 compute-0 systemd-sysv-generator[74515]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:36:53 compute-0 systemd[1]: Reached target Ceph cluster f7d7fe93-41e5-51c4-b72d-63b38686102e.
Nov 26 12:36:53 compute-0 systemd[1]: Reloading.
Nov 26 12:36:53 compute-0 systemd-sysv-generator[74554]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:36:53 compute-0 systemd-rc-local-generator[74551]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:36:53 compute-0 systemd[1]: Reloading.
Nov 26 12:36:53 compute-0 systemd-rc-local-generator[74588]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:36:53 compute-0 systemd-sysv-generator[74591]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:36:53 compute-0 systemd[1]: Created slice Slice /system/ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e.
Nov 26 12:36:53 compute-0 systemd[1]: Reached target System Time Set.
Nov 26 12:36:53 compute-0 systemd[1]: Reached target System Time Synchronized.
Nov 26 12:36:53 compute-0 systemd[1]: Starting Ceph mon.compute-0 for f7d7fe93-41e5-51c4-b72d-63b38686102e...
Nov 26 12:36:53 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 12:36:53 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 12:36:53 compute-0 podman[74643]: 2025-11-26 12:36:53.829444606 +0000 UTC m=+0.026843515 container create dbc7bfa56c05965b50c5f72b9ecc884eef99bde2350df7b1e35e6cb0197d6d6e (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:36:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ddbe35422285337201c75b341abbc4f716cb469c9e55edb3b7035d51f06188/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ddbe35422285337201c75b341abbc4f716cb469c9e55edb3b7035d51f06188/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ddbe35422285337201c75b341abbc4f716cb469c9e55edb3b7035d51f06188/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ddbe35422285337201c75b341abbc4f716cb469c9e55edb3b7035d51f06188/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:53 compute-0 podman[74643]: 2025-11-26 12:36:53.871868157 +0000 UTC m=+0.069267056 container init dbc7bfa56c05965b50c5f72b9ecc884eef99bde2350df7b1e35e6cb0197d6d6e (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 12:36:53 compute-0 podman[74643]: 2025-11-26 12:36:53.876908744 +0000 UTC m=+0.074307644 container start dbc7bfa56c05965b50c5f72b9ecc884eef99bde2350df7b1e35e6cb0197d6d6e (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 26 12:36:53 compute-0 bash[74643]: dbc7bfa56c05965b50c5f72b9ecc884eef99bde2350df7b1e35e6cb0197d6d6e
Nov 26 12:36:53 compute-0 podman[74643]: 2025-11-26 12:36:53.817277648 +0000 UTC m=+0.014676567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:36:53 compute-0 systemd[1]: Started Ceph mon.compute-0 for f7d7fe93-41e5-51c4-b72d-63b38686102e.
Nov 26 12:36:53 compute-0 ceph-mon[74659]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 12:36:53 compute-0 ceph-mon[74659]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 26 12:36:53 compute-0 ceph-mon[74659]: pidfile_write: ignore empty --pid-file
Nov 26 12:36:53 compute-0 ceph-mon[74659]: load: jerasure load: lrc 
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: RocksDB version: 7.9.2
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: Git sha 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: DB SUMMARY
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: DB Session ID:  YBP93YZ1IQGH1EXX8KK1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: CURRENT file:  CURRENT
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                         Options.error_if_exists: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                       Options.create_if_missing: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                                     Options.env: 0x55c901bffc40
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                                Options.info_log: 0x55c903c6ce80
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                              Options.statistics: (nil)
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                               Options.use_fsync: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                              Options.db_log_dir: 
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                                 Options.wal_dir: 
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                    Options.write_buffer_manager: 0x55c903c7cb40
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                  Options.unordered_write: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                               Options.row_cache: None
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                              Options.wal_filter: None
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:             Options.two_write_queues: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:             Options.wal_compression: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:             Options.atomic_flush: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:             Options.max_background_jobs: 2
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:             Options.max_background_compactions: -1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:             Options.max_subcompactions: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:             Options.max_total_wal_size: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                          Options.max_open_files: -1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:       Options.compaction_readahead_size: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: Compression algorithms supported:
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:         kZSTD supported: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:         kXpressCompression supported: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:         kBZip2Compression supported: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:         kLZ4Compression supported: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:         kZlibCompression supported: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:         kLZ4HCCompression supported: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:         kSnappyCompression supported: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:           Options.merge_operator: 
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:        Options.compaction_filter: None
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c903c6ca80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c903c651f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:        Options.write_buffer_size: 33554432
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:  Options.max_write_buffer_number: 2
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:          Options.compression: NoCompression
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:             Options.num_levels: 7
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 360f285c-8dc8-4f98-b8a2-efdebada3f64
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160613908101, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160613908845, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160613, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "360f285c-8dc8-4f98-b8a2-efdebada3f64", "db_session_id": "YBP93YZ1IQGH1EXX8KK1", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160613908921, "job": 1, "event": "recovery_finished"}
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55c903c8ee00
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: DB pointer 0x55c903d18000
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 12:36:53 compute-0 ceph-mon[74659]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      2.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      2.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      2.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      2.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.34 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.34 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c903c651f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 26 12:36:53 compute-0 ceph-mon[74659]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@-1(???) e0 preinit fsid f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(probing) e0 win_standalone_election
Nov 26 12:36:53 compute-0 ceph-mon[74659]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 26 12:36:53 compute-0 ceph-mon[74659]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 26 12:36:53 compute-0 ceph-mon[74659]: paxos.0).electionLogic(2) init, last seen epoch 2
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 26 12:36:53 compute-0 ceph-mon[74659]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 26 12:36:53 compute-0 ceph-mon[74659]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC 7763 64-Core Processor,created_at=2025-11-26T12:36:52.617229Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:04:00.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7865364,os=Linux}
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(leader).mds e1 new map
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 26 12:36:53 compute-0 ceph-mon[74659]: log_channel(cluster) log [DBG] : fsmap 
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mkfs f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 26 12:36:53 compute-0 podman[74660]: 2025-11-26 12:36:53.930829039 +0000 UTC m=+0.032652549 container create 3dd195acac6e03113b31c8ead253800a066fc49f6f56de6160579eec019908da (image=quay.io/ceph/ceph:v18, name=pensive_goldwasser, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Nov 26 12:36:53 compute-0 ceph-mon[74659]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 26 12:36:53 compute-0 ceph-mon[74659]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 26 12:36:53 compute-0 ceph-mon[74659]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 26 12:36:53 compute-0 systemd[1]: Started libpod-conmon-3dd195acac6e03113b31c8ead253800a066fc49f6f56de6160579eec019908da.scope.
Nov 26 12:36:53 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:36:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c0462980420487a1263cd58b785caf21a67177a511149b33ae6e2b95d2f957/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c0462980420487a1263cd58b785caf21a67177a511149b33ae6e2b95d2f957/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c0462980420487a1263cd58b785caf21a67177a511149b33ae6e2b95d2f957/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:53 compute-0 podman[74660]: 2025-11-26 12:36:53.988847687 +0000 UTC m=+0.090671206 container init 3dd195acac6e03113b31c8ead253800a066fc49f6f56de6160579eec019908da (image=quay.io/ceph/ceph:v18, name=pensive_goldwasser, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 12:36:53 compute-0 podman[74660]: 2025-11-26 12:36:53.993058923 +0000 UTC m=+0.094882432 container start 3dd195acac6e03113b31c8ead253800a066fc49f6f56de6160579eec019908da (image=quay.io/ceph/ceph:v18, name=pensive_goldwasser, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 12:36:53 compute-0 podman[74660]: 2025-11-26 12:36:53.994069899 +0000 UTC m=+0.095893407 container attach 3dd195acac6e03113b31c8ead253800a066fc49f6f56de6160579eec019908da (image=quay.io/ceph/ceph:v18, name=pensive_goldwasser, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Nov 26 12:36:54 compute-0 podman[74660]: 2025-11-26 12:36:53.91729086 +0000 UTC m=+0.019114389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:36:54 compute-0 ceph-mon[74659]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 26 12:36:54 compute-0 ceph-mon[74659]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/221525536' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 26 12:36:54 compute-0 pensive_goldwasser[74711]:   cluster:
Nov 26 12:36:54 compute-0 pensive_goldwasser[74711]:     id:     f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 12:36:54 compute-0 pensive_goldwasser[74711]:     health: HEALTH_OK
Nov 26 12:36:54 compute-0 pensive_goldwasser[74711]:  
Nov 26 12:36:54 compute-0 pensive_goldwasser[74711]:   services:
Nov 26 12:36:54 compute-0 pensive_goldwasser[74711]:     mon: 1 daemons, quorum compute-0 (age 0.391389s)
Nov 26 12:36:54 compute-0 pensive_goldwasser[74711]:     mgr: no daemons active
Nov 26 12:36:54 compute-0 pensive_goldwasser[74711]:     osd: 0 osds: 0 up, 0 in
Nov 26 12:36:54 compute-0 pensive_goldwasser[74711]:  
Nov 26 12:36:54 compute-0 pensive_goldwasser[74711]:   data:
Nov 26 12:36:54 compute-0 pensive_goldwasser[74711]:     pools:   0 pools, 0 pgs
Nov 26 12:36:54 compute-0 pensive_goldwasser[74711]:     objects: 0 objects, 0 B
Nov 26 12:36:54 compute-0 pensive_goldwasser[74711]:     usage:   0 B used, 0 B / 0 B avail
Nov 26 12:36:54 compute-0 pensive_goldwasser[74711]:     pgs:     
Nov 26 12:36:54 compute-0 pensive_goldwasser[74711]:  
Nov 26 12:36:54 compute-0 systemd[1]: libpod-3dd195acac6e03113b31c8ead253800a066fc49f6f56de6160579eec019908da.scope: Deactivated successfully.
Nov 26 12:36:54 compute-0 podman[74660]: 2025-11-26 12:36:54.33178796 +0000 UTC m=+0.433611468 container died 3dd195acac6e03113b31c8ead253800a066fc49f6f56de6160579eec019908da (image=quay.io/ceph/ceph:v18, name=pensive_goldwasser, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:36:54 compute-0 podman[74660]: 2025-11-26 12:36:54.353701312 +0000 UTC m=+0.455524821 container remove 3dd195acac6e03113b31c8ead253800a066fc49f6f56de6160579eec019908da (image=quay.io/ceph/ceph:v18, name=pensive_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 12:36:54 compute-0 systemd[1]: libpod-conmon-3dd195acac6e03113b31c8ead253800a066fc49f6f56de6160579eec019908da.scope: Deactivated successfully.
Nov 26 12:36:54 compute-0 podman[74747]: 2025-11-26 12:36:54.39153831 +0000 UTC m=+0.023944440 container create fc10f54b58bf7e44b173958409fc3728da00ffacbe6ea3ef6913c4c1027d43bd (image=quay.io/ceph/ceph:v18, name=vigorous_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:36:54 compute-0 systemd[1]: Started libpod-conmon-fc10f54b58bf7e44b173958409fc3728da00ffacbe6ea3ef6913c4c1027d43bd.scope.
Nov 26 12:36:54 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0cf7a15c6d5798da95aa689c8b533a72676bb742a19c5c50284b9e53c55cd53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0cf7a15c6d5798da95aa689c8b533a72676bb742a19c5c50284b9e53c55cd53/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0cf7a15c6d5798da95aa689c8b533a72676bb742a19c5c50284b9e53c55cd53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0cf7a15c6d5798da95aa689c8b533a72676bb742a19c5c50284b9e53c55cd53/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:54 compute-0 podman[74747]: 2025-11-26 12:36:54.43843014 +0000 UTC m=+0.070836281 container init fc10f54b58bf7e44b173958409fc3728da00ffacbe6ea3ef6913c4c1027d43bd (image=quay.io/ceph/ceph:v18, name=vigorous_yonath, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:36:54 compute-0 podman[74747]: 2025-11-26 12:36:54.442678456 +0000 UTC m=+0.075084577 container start fc10f54b58bf7e44b173958409fc3728da00ffacbe6ea3ef6913c4c1027d43bd (image=quay.io/ceph/ceph:v18, name=vigorous_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 12:36:54 compute-0 podman[74747]: 2025-11-26 12:36:54.443866094 +0000 UTC m=+0.076272215 container attach fc10f54b58bf7e44b173958409fc3728da00ffacbe6ea3ef6913c4c1027d43bd (image=quay.io/ceph/ceph:v18, name=vigorous_yonath, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:36:54 compute-0 podman[74747]: 2025-11-26 12:36:54.381794038 +0000 UTC m=+0.014200178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:36:54 compute-0 ceph-mon[74659]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 26 12:36:54 compute-0 ceph-mon[74659]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/24887754' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 26 12:36:54 compute-0 ceph-mon[74659]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/24887754' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 26 12:36:54 compute-0 vigorous_yonath[74761]: 
Nov 26 12:36:54 compute-0 vigorous_yonath[74761]: [global]
Nov 26 12:36:54 compute-0 vigorous_yonath[74761]:         fsid = f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 12:36:54 compute-0 vigorous_yonath[74761]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Nov 26 12:36:54 compute-0 vigorous_yonath[74761]:         osd_crush_chooseleaf_type = 0
Nov 26 12:36:54 compute-0 systemd[1]: libpod-fc10f54b58bf7e44b173958409fc3728da00ffacbe6ea3ef6913c4c1027d43bd.scope: Deactivated successfully.
Nov 26 12:36:54 compute-0 podman[74787]: 2025-11-26 12:36:54.790951797 +0000 UTC m=+0.017010485 container died fc10f54b58bf7e44b173958409fc3728da00ffacbe6ea3ef6913c4c1027d43bd (image=quay.io/ceph/ceph:v18, name=vigorous_yonath, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:36:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0cf7a15c6d5798da95aa689c8b533a72676bb742a19c5c50284b9e53c55cd53-merged.mount: Deactivated successfully.
Nov 26 12:36:54 compute-0 podman[74787]: 2025-11-26 12:36:54.809399219 +0000 UTC m=+0.035457897 container remove fc10f54b58bf7e44b173958409fc3728da00ffacbe6ea3ef6913c4c1027d43bd (image=quay.io/ceph/ceph:v18, name=vigorous_yonath, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:36:54 compute-0 systemd[1]: libpod-conmon-fc10f54b58bf7e44b173958409fc3728da00ffacbe6ea3ef6913c4c1027d43bd.scope: Deactivated successfully.
Nov 26 12:36:54 compute-0 podman[74798]: 2025-11-26 12:36:54.852041852 +0000 UTC m=+0.025910615 container create 3acb116bcd8d26a79e998f07c06dd8b7b6f4b7b03dd7d8bf29a9e8544a8a313d (image=quay.io/ceph/ceph:v18, name=vibrant_grothendieck, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:36:54 compute-0 systemd[1]: Started libpod-conmon-3acb116bcd8d26a79e998f07c06dd8b7b6f4b7b03dd7d8bf29a9e8544a8a313d.scope.
Nov 26 12:36:54 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/476ddfe87045a6536ba27bb0746362643d20d3452b98e22e85ca253d1f458492/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/476ddfe87045a6536ba27bb0746362643d20d3452b98e22e85ca253d1f458492/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/476ddfe87045a6536ba27bb0746362643d20d3452b98e22e85ca253d1f458492/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/476ddfe87045a6536ba27bb0746362643d20d3452b98e22e85ca253d1f458492/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:54 compute-0 podman[74798]: 2025-11-26 12:36:54.90461563 +0000 UTC m=+0.078484383 container init 3acb116bcd8d26a79e998f07c06dd8b7b6f4b7b03dd7d8bf29a9e8544a8a313d (image=quay.io/ceph/ceph:v18, name=vibrant_grothendieck, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:36:54 compute-0 podman[74798]: 2025-11-26 12:36:54.910449583 +0000 UTC m=+0.084318336 container start 3acb116bcd8d26a79e998f07c06dd8b7b6f4b7b03dd7d8bf29a9e8544a8a313d (image=quay.io/ceph/ceph:v18, name=vibrant_grothendieck, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:36:54 compute-0 podman[74798]: 2025-11-26 12:36:54.911642291 +0000 UTC m=+0.085511043 container attach 3acb116bcd8d26a79e998f07c06dd8b7b6f4b7b03dd7d8bf29a9e8544a8a313d (image=quay.io/ceph/ceph:v18, name=vibrant_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:36:54 compute-0 ceph-mon[74659]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 26 12:36:54 compute-0 ceph-mon[74659]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 26 12:36:54 compute-0 ceph-mon[74659]: fsmap 
Nov 26 12:36:54 compute-0 ceph-mon[74659]: osdmap e1: 0 total, 0 up, 0 in
Nov 26 12:36:54 compute-0 ceph-mon[74659]: mgrmap e1: no daemons active
Nov 26 12:36:54 compute-0 ceph-mon[74659]: from='client.? 192.168.122.100:0/221525536' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 26 12:36:54 compute-0 ceph-mon[74659]: from='client.? 192.168.122.100:0/24887754' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 26 12:36:54 compute-0 ceph-mon[74659]: from='client.? 192.168.122.100:0/24887754' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 26 12:36:54 compute-0 podman[74798]: 2025-11-26 12:36:54.84230228 +0000 UTC m=+0.016171053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:36:55 compute-0 ceph-mon[74659]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:36:55 compute-0 ceph-mon[74659]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/201017770' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:36:55 compute-0 systemd[1]: libpod-3acb116bcd8d26a79e998f07c06dd8b7b6f4b7b03dd7d8bf29a9e8544a8a313d.scope: Deactivated successfully.
Nov 26 12:36:55 compute-0 podman[74798]: 2025-11-26 12:36:55.232230994 +0000 UTC m=+0.406099758 container died 3acb116bcd8d26a79e998f07c06dd8b7b6f4b7b03dd7d8bf29a9e8544a8a313d (image=quay.io/ceph/ceph:v18, name=vibrant_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:36:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-476ddfe87045a6536ba27bb0746362643d20d3452b98e22e85ca253d1f458492-merged.mount: Deactivated successfully.
Nov 26 12:36:55 compute-0 podman[74798]: 2025-11-26 12:36:55.253284837 +0000 UTC m=+0.427153590 container remove 3acb116bcd8d26a79e998f07c06dd8b7b6f4b7b03dd7d8bf29a9e8544a8a313d (image=quay.io/ceph/ceph:v18, name=vibrant_grothendieck, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 26 12:36:55 compute-0 systemd[1]: libpod-conmon-3acb116bcd8d26a79e998f07c06dd8b7b6f4b7b03dd7d8bf29a9e8544a8a313d.scope: Deactivated successfully.
Nov 26 12:36:55 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for f7d7fe93-41e5-51c4-b72d-63b38686102e...
Nov 26 12:36:55 compute-0 ceph-mon[74659]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 26 12:36:55 compute-0 ceph-mon[74659]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 26 12:36:55 compute-0 ceph-mon[74659]: mon.compute-0@0(leader) e1 shutdown
Nov 26 12:36:55 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0[74655]: 2025-11-26T12:36:55.377+0000 7fc184ea7640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 26 12:36:55 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0[74655]: 2025-11-26T12:36:55.377+0000 7fc184ea7640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 26 12:36:55 compute-0 ceph-mon[74659]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 26 12:36:55 compute-0 ceph-mon[74659]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 26 12:36:55 compute-0 podman[74869]: 2025-11-26 12:36:55.571916672 +0000 UTC m=+0.216745707 container died dbc7bfa56c05965b50c5f72b9ecc884eef99bde2350df7b1e35e6cb0197d6d6e (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 12:36:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8ddbe35422285337201c75b341abbc4f716cb469c9e55edb3b7035d51f06188-merged.mount: Deactivated successfully.
Nov 26 12:36:55 compute-0 podman[74869]: 2025-11-26 12:36:55.588343526 +0000 UTC m=+0.233172562 container remove dbc7bfa56c05965b50c5f72b9ecc884eef99bde2350df7b1e35e6cb0197d6d6e (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:36:55 compute-0 bash[74869]: ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0
Nov 26 12:36:55 compute-0 systemd[1]: ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e@mon.compute-0.service: Deactivated successfully.
Nov 26 12:36:55 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for f7d7fe93-41e5-51c4-b72d-63b38686102e.
Nov 26 12:36:55 compute-0 systemd[1]: Starting Ceph mon.compute-0 for f7d7fe93-41e5-51c4-b72d-63b38686102e...
Nov 26 12:36:55 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 12:36:55 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 12:36:55 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 12:36:55 compute-0 podman[74949]: 2025-11-26 12:36:55.825626662 +0000 UTC m=+0.026159975 container create ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dedfc9236bab9bbf24c03bcf7160738704e686ab3e0d14bb389ebbc17c094ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dedfc9236bab9bbf24c03bcf7160738704e686ab3e0d14bb389ebbc17c094ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dedfc9236bab9bbf24c03bcf7160738704e686ab3e0d14bb389ebbc17c094ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dedfc9236bab9bbf24c03bcf7160738704e686ab3e0d14bb389ebbc17c094ed/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:55 compute-0 podman[74949]: 2025-11-26 12:36:55.868181241 +0000 UTC m=+0.068714564 container init ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Nov 26 12:36:55 compute-0 podman[74949]: 2025-11-26 12:36:55.874377205 +0000 UTC m=+0.074910519 container start ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 12:36:55 compute-0 bash[74949]: ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537
Nov 26 12:36:55 compute-0 podman[74949]: 2025-11-26 12:36:55.815115766 +0000 UTC m=+0.015649099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:36:55 compute-0 systemd[1]: Started Ceph mon.compute-0 for f7d7fe93-41e5-51c4-b72d-63b38686102e.
Nov 26 12:36:55 compute-0 ceph-mon[74966]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 12:36:55 compute-0 ceph-mon[74966]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 26 12:36:55 compute-0 ceph-mon[74966]: pidfile_write: ignore empty --pid-file
Nov 26 12:36:55 compute-0 ceph-mon[74966]: load: jerasure load: lrc 
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: RocksDB version: 7.9.2
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: Git sha 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: DB SUMMARY
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: DB Session ID:  S468WH7D6IL73VDKE1V5
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: CURRENT file:  CURRENT
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 54266 ; 
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                         Options.error_if_exists: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                       Options.create_if_missing: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                                     Options.env: 0x560bce926c40
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                                Options.info_log: 0x560bd0ea3040
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                              Options.statistics: (nil)
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                               Options.use_fsync: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                              Options.db_log_dir: 
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                                 Options.wal_dir: 
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                    Options.write_buffer_manager: 0x560bd0eb2b40
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                  Options.unordered_write: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                               Options.row_cache: None
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                              Options.wal_filter: None
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:             Options.two_write_queues: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:             Options.wal_compression: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:             Options.atomic_flush: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:             Options.max_background_jobs: 2
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:             Options.max_background_compactions: -1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:             Options.max_subcompactions: 1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:             Options.max_total_wal_size: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                          Options.max_open_files: -1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:       Options.compaction_readahead_size: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: Compression algorithms supported:
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:         kZSTD supported: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:         kXpressCompression supported: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:         kBZip2Compression supported: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:         kLZ4Compression supported: 1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:         kZlibCompression supported: 1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:         kLZ4HCCompression supported: 1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:         kSnappyCompression supported: 1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:           Options.merge_operator: 
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:        Options.compaction_filter: None
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560bd0ea2c40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560bd0e9b1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:        Options.write_buffer_size: 33554432
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:  Options.max_write_buffer_number: 2
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:          Options.compression: NoCompression
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:             Options.num_levels: 7
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 360f285c-8dc8-4f98-b8a2-efdebada3f64
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160615908587, "job": 1, "event": "recovery_started", "wal_files": [9]}
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160615909926, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 53966, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 131, "table_properties": {"data_size": 52525, "index_size": 147, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2994, "raw_average_key_size": 30, "raw_value_size": 50172, "raw_average_value_size": 511, "num_data_blocks": 7, "num_entries": 98, "num_filter_entries": 98, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160615, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "360f285c-8dc8-4f98-b8a2-efdebada3f64", "db_session_id": "S468WH7D6IL73VDKE1V5", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160615910005, "job": 1, "event": "recovery_finished"}
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560bd0ec4e00
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: DB pointer 0x560bd0f4e000
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 12:36:55 compute-0 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   54.60 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     48.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      2/0   54.60 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     48.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     48.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     48.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 6.99 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 6.99 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560bd0e9b1f0#2 capacity: 512.00 MB usage: 0.77 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.34 KB,6.55651e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 26 12:36:55 compute-0 ceph-mon[74966]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 12:36:55 compute-0 ceph-mon[74966]: mon.compute-0@-1(???) e1 preinit fsid f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 12:36:55 compute-0 ceph-mon[74966]: mon.compute-0@-1(???).mds e1 new map
Nov 26 12:36:55 compute-0 ceph-mon[74966]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Nov 26 12:36:55 compute-0 ceph-mon[74966]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 26 12:36:55 compute-0 ceph-mon[74966]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 26 12:36:55 compute-0 ceph-mon[74966]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 26 12:36:55 compute-0 ceph-mon[74966]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 26 12:36:55 compute-0 ceph-mon[74966]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Nov 26 12:36:55 compute-0 ceph-mon[74966]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Nov 26 12:36:55 compute-0 ceph-mon[74966]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 26 12:36:55 compute-0 ceph-mon[74966]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Nov 26 12:36:55 compute-0 ceph-mon[74966]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 26 12:36:55 compute-0 ceph-mon[74966]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 26 12:36:55 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 26 12:36:55 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 26 12:36:55 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : fsmap 
Nov 26 12:36:55 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 26 12:36:55 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 26 12:36:55 compute-0 podman[74967]: 2025-11-26 12:36:55.921007014 +0000 UTC m=+0.027659613 container create fc5f1da25af723c92236300b3013694972fa2365860ba93f12e4848ed5834933 (image=quay.io/ceph/ceph:v18, name=determined_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:36:55 compute-0 systemd[1]: Started libpod-conmon-fc5f1da25af723c92236300b3013694972fa2365860ba93f12e4848ed5834933.scope.
Nov 26 12:36:55 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2c947532bf5a3da7eb2c6daa68d695883687b89669fd83cd5a891aba607786/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2c947532bf5a3da7eb2c6daa68d695883687b89669fd83cd5a891aba607786/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2c947532bf5a3da7eb2c6daa68d695883687b89669fd83cd5a891aba607786/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:55 compute-0 podman[74967]: 2025-11-26 12:36:55.976690009 +0000 UTC m=+0.083342599 container init fc5f1da25af723c92236300b3013694972fa2365860ba93f12e4848ed5834933 (image=quay.io/ceph/ceph:v18, name=determined_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:36:55 compute-0 podman[74967]: 2025-11-26 12:36:55.982096317 +0000 UTC m=+0.088748906 container start fc5f1da25af723c92236300b3013694972fa2365860ba93f12e4848ed5834933 (image=quay.io/ceph/ceph:v18, name=determined_wescoff, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 12:36:55 compute-0 podman[74967]: 2025-11-26 12:36:55.983635628 +0000 UTC m=+0.090288237 container attach fc5f1da25af723c92236300b3013694972fa2365860ba93f12e4848ed5834933 (image=quay.io/ceph/ceph:v18, name=determined_wescoff, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:36:56 compute-0 podman[74967]: 2025-11-26 12:36:55.90995148 +0000 UTC m=+0.016604089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:36:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Nov 26 12:36:56 compute-0 systemd[1]: libpod-fc5f1da25af723c92236300b3013694972fa2365860ba93f12e4848ed5834933.scope: Deactivated successfully.
Nov 26 12:36:56 compute-0 podman[74967]: 2025-11-26 12:36:56.313942694 +0000 UTC m=+0.420595283 container died fc5f1da25af723c92236300b3013694972fa2365860ba93f12e4848ed5834933 (image=quay.io/ceph/ceph:v18, name=determined_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:36:56 compute-0 podman[74967]: 2025-11-26 12:36:56.335680104 +0000 UTC m=+0.442332694 container remove fc5f1da25af723c92236300b3013694972fa2365860ba93f12e4848ed5834933 (image=quay.io/ceph/ceph:v18, name=determined_wescoff, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 12:36:56 compute-0 systemd[1]: libpod-conmon-fc5f1da25af723c92236300b3013694972fa2365860ba93f12e4848ed5834933.scope: Deactivated successfully.
Nov 26 12:36:56 compute-0 podman[75054]: 2025-11-26 12:36:56.376769712 +0000 UTC m=+0.026639419 container create 74422ccac611b554fca80e6ea17d8579b6ffb7623de3337e7ca9015135c20864 (image=quay.io/ceph/ceph:v18, name=stupefied_almeida, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:36:56 compute-0 systemd[1]: Started libpod-conmon-74422ccac611b554fca80e6ea17d8579b6ffb7623de3337e7ca9015135c20864.scope.
Nov 26 12:36:56 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:36:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48dc10a4ef776a7ee2a65386f0771c11559c8bcdbd71fc2b424a4bbdd0082b1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48dc10a4ef776a7ee2a65386f0771c11559c8bcdbd71fc2b424a4bbdd0082b1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48dc10a4ef776a7ee2a65386f0771c11559c8bcdbd71fc2b424a4bbdd0082b1a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:56 compute-0 podman[75054]: 2025-11-26 12:36:56.4291263 +0000 UTC m=+0.078996017 container init 74422ccac611b554fca80e6ea17d8579b6ffb7623de3337e7ca9015135c20864 (image=quay.io/ceph/ceph:v18, name=stupefied_almeida, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 12:36:56 compute-0 podman[75054]: 2025-11-26 12:36:56.433386438 +0000 UTC m=+0.083256135 container start 74422ccac611b554fca80e6ea17d8579b6ffb7623de3337e7ca9015135c20864 (image=quay.io/ceph/ceph:v18, name=stupefied_almeida, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:36:56 compute-0 podman[75054]: 2025-11-26 12:36:56.434779874 +0000 UTC m=+0.084649591 container attach 74422ccac611b554fca80e6ea17d8579b6ffb7623de3337e7ca9015135c20864 (image=quay.io/ceph/ceph:v18, name=stupefied_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:36:56 compute-0 podman[75054]: 2025-11-26 12:36:56.366156203 +0000 UTC m=+0.016025920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:36:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Nov 26 12:36:56 compute-0 systemd[1]: libpod-74422ccac611b554fca80e6ea17d8579b6ffb7623de3337e7ca9015135c20864.scope: Deactivated successfully.
Nov 26 12:36:56 compute-0 conmon[75069]: conmon 74422ccac611b554fca8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-74422ccac611b554fca80e6ea17d8579b6ffb7623de3337e7ca9015135c20864.scope/container/memory.events
Nov 26 12:36:56 compute-0 podman[75054]: 2025-11-26 12:36:56.764864581 +0000 UTC m=+0.414734277 container died 74422ccac611b554fca80e6ea17d8579b6ffb7623de3337e7ca9015135c20864 (image=quay.io/ceph/ceph:v18, name=stupefied_almeida, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 12:36:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-48dc10a4ef776a7ee2a65386f0771c11559c8bcdbd71fc2b424a4bbdd0082b1a-merged.mount: Deactivated successfully.
Nov 26 12:36:56 compute-0 podman[75054]: 2025-11-26 12:36:56.785202525 +0000 UTC m=+0.435072222 container remove 74422ccac611b554fca80e6ea17d8579b6ffb7623de3337e7ca9015135c20864 (image=quay.io/ceph/ceph:v18, name=stupefied_almeida, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:36:56 compute-0 systemd[1]: libpod-conmon-74422ccac611b554fca80e6ea17d8579b6ffb7623de3337e7ca9015135c20864.scope: Deactivated successfully.
Nov 26 12:36:56 compute-0 systemd[1]: Reloading.
Nov 26 12:36:56 compute-0 systemd-rc-local-generator[75124]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:36:56 compute-0 systemd-sysv-generator[75128]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:36:56 compute-0 ceph-mon[74966]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 26 12:36:56 compute-0 ceph-mon[74966]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 26 12:36:56 compute-0 ceph-mon[74966]: fsmap 
Nov 26 12:36:56 compute-0 ceph-mon[74966]: osdmap e1: 0 total, 0 up, 0 in
Nov 26 12:36:56 compute-0 ceph-mon[74966]: mgrmap e1: no daemons active
Nov 26 12:36:57 compute-0 systemd[1]: Reloading.
Nov 26 12:36:57 compute-0 systemd-sysv-generator[75168]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:36:57 compute-0 systemd-rc-local-generator[75163]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:36:57 compute-0 systemd[1]: Starting Ceph mgr.compute-0.whkbdn for f7d7fe93-41e5-51c4-b72d-63b38686102e...
Nov 26 12:36:57 compute-0 podman[75220]: 2025-11-26 12:36:57.353059008 +0000 UTC m=+0.027097903 container create c06d21624ca8869dd82756bdbc9957ce848a0aa0b6a72b8cb547377849a6a817 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 12:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8eae1bd290cade33e00ffe53834b366c548e11123a8a82238aa7c5d798c68d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8eae1bd290cade33e00ffe53834b366c548e11123a8a82238aa7c5d798c68d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8eae1bd290cade33e00ffe53834b366c548e11123a8a82238aa7c5d798c68d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8eae1bd290cade33e00ffe53834b366c548e11123a8a82238aa7c5d798c68d4/merged/var/lib/ceph/mgr/ceph-compute-0.whkbdn supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:57 compute-0 podman[75220]: 2025-11-26 12:36:57.391054315 +0000 UTC m=+0.065093220 container init c06d21624ca8869dd82756bdbc9957ce848a0aa0b6a72b8cb547377849a6a817 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 12:36:57 compute-0 podman[75220]: 2025-11-26 12:36:57.394927533 +0000 UTC m=+0.068966428 container start c06d21624ca8869dd82756bdbc9957ce848a0aa0b6a72b8cb547377849a6a817 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 12:36:57 compute-0 bash[75220]: c06d21624ca8869dd82756bdbc9957ce848a0aa0b6a72b8cb547377849a6a817
Nov 26 12:36:57 compute-0 podman[75220]: 2025-11-26 12:36:57.341230639 +0000 UTC m=+0.015269545 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:36:57 compute-0 systemd[1]: Started Ceph mgr.compute-0.whkbdn for f7d7fe93-41e5-51c4-b72d-63b38686102e.
Nov 26 12:36:57 compute-0 ceph-mgr[75236]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 12:36:57 compute-0 ceph-mgr[75236]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 26 12:36:57 compute-0 ceph-mgr[75236]: pidfile_write: ignore empty --pid-file
Nov 26 12:36:57 compute-0 podman[75237]: 2025-11-26 12:36:57.444915909 +0000 UTC m=+0.028053694 container create 7bbde913a86ae6194145ce9f3311f706d3113d4f6d050c80008ee7a677ee8f5b (image=quay.io/ceph/ceph:v18, name=sad_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:36:57 compute-0 systemd[1]: Started libpod-conmon-7bbde913a86ae6194145ce9f3311f706d3113d4f6d050c80008ee7a677ee8f5b.scope.
Nov 26 12:36:57 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df39d75f067f561eff3c88876ca95edbcc9d00fb921b628e291e11d799d63ecf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df39d75f067f561eff3c88876ca95edbcc9d00fb921b628e291e11d799d63ecf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df39d75f067f561eff3c88876ca95edbcc9d00fb921b628e291e11d799d63ecf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:57 compute-0 podman[75237]: 2025-11-26 12:36:57.500784285 +0000 UTC m=+0.083922071 container init 7bbde913a86ae6194145ce9f3311f706d3113d4f6d050c80008ee7a677ee8f5b (image=quay.io/ceph/ceph:v18, name=sad_cerf, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 12:36:57 compute-0 podman[75237]: 2025-11-26 12:36:57.506091896 +0000 UTC m=+0.089229682 container start 7bbde913a86ae6194145ce9f3311f706d3113d4f6d050c80008ee7a677ee8f5b (image=quay.io/ceph/ceph:v18, name=sad_cerf, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 12:36:57 compute-0 podman[75237]: 2025-11-26 12:36:57.507184836 +0000 UTC m=+0.090322621 container attach 7bbde913a86ae6194145ce9f3311f706d3113d4f6d050c80008ee7a677ee8f5b (image=quay.io/ceph/ceph:v18, name=sad_cerf, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:36:57 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'alerts'
Nov 26 12:36:57 compute-0 podman[75237]: 2025-11-26 12:36:57.434364216 +0000 UTC m=+0.017502021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:36:57 compute-0 ceph-mgr[75236]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 26 12:36:57 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'balancer'
Nov 26 12:36:57 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:36:57.784+0000 7f954fa56140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 26 12:36:57 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 12:36:57 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/100115608' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 12:36:57 compute-0 sad_cerf[75274]: 
Nov 26 12:36:57 compute-0 sad_cerf[75274]: {
Nov 26 12:36:57 compute-0 sad_cerf[75274]:     "fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:36:57 compute-0 sad_cerf[75274]:     "health": {
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "status": "HEALTH_OK",
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "checks": {},
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "mutes": []
Nov 26 12:36:57 compute-0 sad_cerf[75274]:     },
Nov 26 12:36:57 compute-0 sad_cerf[75274]:     "election_epoch": 5,
Nov 26 12:36:57 compute-0 sad_cerf[75274]:     "quorum": [
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         0
Nov 26 12:36:57 compute-0 sad_cerf[75274]:     ],
Nov 26 12:36:57 compute-0 sad_cerf[75274]:     "quorum_names": [
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "compute-0"
Nov 26 12:36:57 compute-0 sad_cerf[75274]:     ],
Nov 26 12:36:57 compute-0 sad_cerf[75274]:     "quorum_age": 1,
Nov 26 12:36:57 compute-0 sad_cerf[75274]:     "monmap": {
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "epoch": 1,
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "min_mon_release_name": "reef",
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "num_mons": 1
Nov 26 12:36:57 compute-0 sad_cerf[75274]:     },
Nov 26 12:36:57 compute-0 sad_cerf[75274]:     "osdmap": {
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "epoch": 1,
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "num_osds": 0,
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "num_up_osds": 0,
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "osd_up_since": 0,
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "num_in_osds": 0,
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "osd_in_since": 0,
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "num_remapped_pgs": 0
Nov 26 12:36:57 compute-0 sad_cerf[75274]:     },
Nov 26 12:36:57 compute-0 sad_cerf[75274]:     "pgmap": {
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "pgs_by_state": [],
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "num_pgs": 0,
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "num_pools": 0,
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "num_objects": 0,
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "data_bytes": 0,
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "bytes_used": 0,
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "bytes_avail": 0,
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "bytes_total": 0
Nov 26 12:36:57 compute-0 sad_cerf[75274]:     },
Nov 26 12:36:57 compute-0 sad_cerf[75274]:     "fsmap": {
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "epoch": 1,
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "by_rank": [],
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "up:standby": 0
Nov 26 12:36:57 compute-0 sad_cerf[75274]:     },
Nov 26 12:36:57 compute-0 sad_cerf[75274]:     "mgrmap": {
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "available": false,
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "num_standbys": 0,
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "modules": [
Nov 26 12:36:57 compute-0 sad_cerf[75274]:             "iostat",
Nov 26 12:36:57 compute-0 sad_cerf[75274]:             "nfs",
Nov 26 12:36:57 compute-0 sad_cerf[75274]:             "restful"
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         ],
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "services": {}
Nov 26 12:36:57 compute-0 sad_cerf[75274]:     },
Nov 26 12:36:57 compute-0 sad_cerf[75274]:     "servicemap": {
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "epoch": 1,
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "modified": "2025-11-26T12:36:53.922147+0000",
Nov 26 12:36:57 compute-0 sad_cerf[75274]:         "services": {}
Nov 26 12:36:57 compute-0 sad_cerf[75274]:     },
Nov 26 12:36:57 compute-0 sad_cerf[75274]:     "progress_events": {}
Nov 26 12:36:57 compute-0 sad_cerf[75274]: }
Nov 26 12:36:57 compute-0 systemd[1]: libpod-7bbde913a86ae6194145ce9f3311f706d3113d4f6d050c80008ee7a677ee8f5b.scope: Deactivated successfully.
Nov 26 12:36:57 compute-0 podman[75237]: 2025-11-26 12:36:57.831739412 +0000 UTC m=+0.414877197 container died 7bbde913a86ae6194145ce9f3311f706d3113d4f6d050c80008ee7a677ee8f5b (image=quay.io/ceph/ceph:v18, name=sad_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:36:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-df39d75f067f561eff3c88876ca95edbcc9d00fb921b628e291e11d799d63ecf-merged.mount: Deactivated successfully.
Nov 26 12:36:57 compute-0 podman[75237]: 2025-11-26 12:36:57.861737379 +0000 UTC m=+0.444875164 container remove 7bbde913a86ae6194145ce9f3311f706d3113d4f6d050c80008ee7a677ee8f5b (image=quay.io/ceph/ceph:v18, name=sad_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:36:57 compute-0 systemd[1]: libpod-conmon-7bbde913a86ae6194145ce9f3311f706d3113d4f6d050c80008ee7a677ee8f5b.scope: Deactivated successfully.
Nov 26 12:36:57 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/100115608' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 12:36:58 compute-0 ceph-mgr[75236]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 26 12:36:58 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'cephadm'
Nov 26 12:36:58 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:36:58.016+0000 7f954fa56140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 26 12:36:59 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'crash'
Nov 26 12:36:59 compute-0 ceph-mgr[75236]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 26 12:36:59 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'dashboard'
Nov 26 12:36:59 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:36:59.875+0000 7f954fa56140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 26 12:36:59 compute-0 podman[75321]: 2025-11-26 12:36:59.90727008 +0000 UTC m=+0.027975739 container create 2716cee07ddd5dba14b071caded2838f006f88de46e90899fa714b7ff728c723 (image=quay.io/ceph/ceph:v18, name=gracious_elbakyan, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 12:36:59 compute-0 systemd[1]: Started libpod-conmon-2716cee07ddd5dba14b071caded2838f006f88de46e90899fa714b7ff728c723.scope.
Nov 26 12:36:59 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6087f78a74b64b90b02ea7deb50a8ff37298e1b19b79d6298ef56e877a7fbbb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6087f78a74b64b90b02ea7deb50a8ff37298e1b19b79d6298ef56e877a7fbbb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6087f78a74b64b90b02ea7deb50a8ff37298e1b19b79d6298ef56e877a7fbbb7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:36:59 compute-0 podman[75321]: 2025-11-26 12:36:59.954351848 +0000 UTC m=+0.075057507 container init 2716cee07ddd5dba14b071caded2838f006f88de46e90899fa714b7ff728c723 (image=quay.io/ceph/ceph:v18, name=gracious_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 12:36:59 compute-0 podman[75321]: 2025-11-26 12:36:59.959533603 +0000 UTC m=+0.080239261 container start 2716cee07ddd5dba14b071caded2838f006f88de46e90899fa714b7ff728c723 (image=quay.io/ceph/ceph:v18, name=gracious_elbakyan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 12:36:59 compute-0 podman[75321]: 2025-11-26 12:36:59.963826422 +0000 UTC m=+0.084532100 container attach 2716cee07ddd5dba14b071caded2838f006f88de46e90899fa714b7ff728c723 (image=quay.io/ceph/ceph:v18, name=gracious_elbakyan, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:36:59 compute-0 podman[75321]: 2025-11-26 12:36:59.895444526 +0000 UTC m=+0.016150194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 12:37:00 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3426630928' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]: 
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]: {
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:     "fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:     "health": {
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "status": "HEALTH_OK",
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "checks": {},
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "mutes": []
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:     },
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:     "election_epoch": 5,
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:     "quorum": [
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         0
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:     ],
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:     "quorum_names": [
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "compute-0"
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:     ],
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:     "quorum_age": 4,
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:     "monmap": {
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "epoch": 1,
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "min_mon_release_name": "reef",
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "num_mons": 1
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:     },
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:     "osdmap": {
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "epoch": 1,
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "num_osds": 0,
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "num_up_osds": 0,
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "osd_up_since": 0,
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "num_in_osds": 0,
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "osd_in_since": 0,
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "num_remapped_pgs": 0
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:     },
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:     "pgmap": {
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "pgs_by_state": [],
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "num_pgs": 0,
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "num_pools": 0,
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "num_objects": 0,
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "data_bytes": 0,
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "bytes_used": 0,
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "bytes_avail": 0,
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "bytes_total": 0
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:     },
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:     "fsmap": {
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "epoch": 1,
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "by_rank": [],
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "up:standby": 0
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:     },
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:     "mgrmap": {
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "available": false,
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "num_standbys": 0,
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "modules": [
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:             "iostat",
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:             "nfs",
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:             "restful"
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         ],
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "services": {}
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:     },
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:     "servicemap": {
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "epoch": 1,
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "modified": "2025-11-26T12:36:53.922147+0000",
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:         "services": {}
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:     },
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]:     "progress_events": {}
Nov 26 12:37:00 compute-0 gracious_elbakyan[75334]: }
Nov 26 12:37:00 compute-0 systemd[1]: libpod-2716cee07ddd5dba14b071caded2838f006f88de46e90899fa714b7ff728c723.scope: Deactivated successfully.
Nov 26 12:37:00 compute-0 podman[75321]: 2025-11-26 12:37:00.28032621 +0000 UTC m=+0.401031868 container died 2716cee07ddd5dba14b071caded2838f006f88de46e90899fa714b7ff728c723 (image=quay.io/ceph/ceph:v18, name=gracious_elbakyan, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 12:37:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-6087f78a74b64b90b02ea7deb50a8ff37298e1b19b79d6298ef56e877a7fbbb7-merged.mount: Deactivated successfully.
Nov 26 12:37:00 compute-0 podman[75321]: 2025-11-26 12:37:00.304139172 +0000 UTC m=+0.424844830 container remove 2716cee07ddd5dba14b071caded2838f006f88de46e90899fa714b7ff728c723 (image=quay.io/ceph/ceph:v18, name=gracious_elbakyan, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:37:00 compute-0 systemd[1]: libpod-conmon-2716cee07ddd5dba14b071caded2838f006f88de46e90899fa714b7ff728c723.scope: Deactivated successfully.
Nov 26 12:37:00 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/3426630928' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 12:37:01 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'devicehealth'
Nov 26 12:37:01 compute-0 ceph-mgr[75236]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 26 12:37:01 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'diskprediction_local'
Nov 26 12:37:01 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:01.308+0000 7f954fa56140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 26 12:37:01 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 26 12:37:01 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 26 12:37:01 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]:   from numpy import show_config as show_numpy_config
Nov 26 12:37:01 compute-0 ceph-mgr[75236]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 26 12:37:01 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'influx'
Nov 26 12:37:01 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:01.766+0000 7f954fa56140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 26 12:37:01 compute-0 ceph-mgr[75236]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 26 12:37:01 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'insights'
Nov 26 12:37:01 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:01.974+0000 7f954fa56140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 26 12:37:02 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'iostat'
Nov 26 12:37:02 compute-0 podman[75371]: 2025-11-26 12:37:02.34640125 +0000 UTC m=+0.026399227 container create 3c069ec8e94d232b4780282cb8916847c538116cbf1c443162d6aad54d983352 (image=quay.io/ceph/ceph:v18, name=gifted_mccarthy, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:37:02 compute-0 systemd[1]: Started libpod-conmon-3c069ec8e94d232b4780282cb8916847c538116cbf1c443162d6aad54d983352.scope.
Nov 26 12:37:02 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb8cd5fda00974aaf9948863ef66d89eca722f9dd22e8b4805031b1fededbe0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb8cd5fda00974aaf9948863ef66d89eca722f9dd22e8b4805031b1fededbe0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb8cd5fda00974aaf9948863ef66d89eca722f9dd22e8b4805031b1fededbe0e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:02 compute-0 ceph-mgr[75236]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 26 12:37:02 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'k8sevents'
Nov 26 12:37:02 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:02.391+0000 7f954fa56140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 26 12:37:02 compute-0 podman[75371]: 2025-11-26 12:37:02.396021763 +0000 UTC m=+0.076019740 container init 3c069ec8e94d232b4780282cb8916847c538116cbf1c443162d6aad54d983352 (image=quay.io/ceph/ceph:v18, name=gifted_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Nov 26 12:37:02 compute-0 podman[75371]: 2025-11-26 12:37:02.400396407 +0000 UTC m=+0.080394414 container start 3c069ec8e94d232b4780282cb8916847c538116cbf1c443162d6aad54d983352 (image=quay.io/ceph/ceph:v18, name=gifted_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:37:02 compute-0 podman[75371]: 2025-11-26 12:37:02.401458016 +0000 UTC m=+0.081455994 container attach 3c069ec8e94d232b4780282cb8916847c538116cbf1c443162d6aad54d983352 (image=quay.io/ceph/ceph:v18, name=gifted_mccarthy, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 12:37:02 compute-0 podman[75371]: 2025-11-26 12:37:02.33538957 +0000 UTC m=+0.015387567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:02 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 12:37:02 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/654192452' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]: 
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]: {
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:     "fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:     "health": {
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "status": "HEALTH_OK",
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "checks": {},
Nov 26 12:37:02 compute-0 chronyd[58583]: Selected source 104.131.155.175 (pool.ntp.org)
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "mutes": []
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:     },
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:     "election_epoch": 5,
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:     "quorum": [
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         0
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:     ],
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:     "quorum_names": [
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "compute-0"
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:     ],
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:     "quorum_age": 6,
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:     "monmap": {
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "epoch": 1,
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "min_mon_release_name": "reef",
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "num_mons": 1
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:     },
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:     "osdmap": {
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "epoch": 1,
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "num_osds": 0,
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "num_up_osds": 0,
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "osd_up_since": 0,
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "num_in_osds": 0,
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "osd_in_since": 0,
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "num_remapped_pgs": 0
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:     },
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:     "pgmap": {
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "pgs_by_state": [],
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "num_pgs": 0,
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "num_pools": 0,
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "num_objects": 0,
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "data_bytes": 0,
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "bytes_used": 0,
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "bytes_avail": 0,
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "bytes_total": 0
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:     },
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:     "fsmap": {
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "epoch": 1,
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "by_rank": [],
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "up:standby": 0
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:     },
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:     "mgrmap": {
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "available": false,
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "num_standbys": 0,
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "modules": [
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:             "iostat",
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:             "nfs",
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:             "restful"
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         ],
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "services": {}
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:     },
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:     "servicemap": {
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "epoch": 1,
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "modified": "2025-11-26T12:36:53.922147+0000",
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:         "services": {}
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:     },
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]:     "progress_events": {}
Nov 26 12:37:02 compute-0 gifted_mccarthy[75383]: }
Nov 26 12:37:02 compute-0 systemd[1]: libpod-3c069ec8e94d232b4780282cb8916847c538116cbf1c443162d6aad54d983352.scope: Deactivated successfully.
Nov 26 12:37:02 compute-0 podman[75371]: 2025-11-26 12:37:02.725594102 +0000 UTC m=+0.405592080 container died 3c069ec8e94d232b4780282cb8916847c538116cbf1c443162d6aad54d983352 (image=quay.io/ceph/ceph:v18, name=gifted_mccarthy, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:37:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb8cd5fda00974aaf9948863ef66d89eca722f9dd22e8b4805031b1fededbe0e-merged.mount: Deactivated successfully.
Nov 26 12:37:02 compute-0 podman[75371]: 2025-11-26 12:37:02.756566091 +0000 UTC m=+0.436564068 container remove 3c069ec8e94d232b4780282cb8916847c538116cbf1c443162d6aad54d983352 (image=quay.io/ceph/ceph:v18, name=gifted_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 12:37:02 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/654192452' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 12:37:02 compute-0 systemd[1]: libpod-conmon-3c069ec8e94d232b4780282cb8916847c538116cbf1c443162d6aad54d983352.scope: Deactivated successfully.
Nov 26 12:37:03 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'localpool'
Nov 26 12:37:04 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'mds_autoscaler'
Nov 26 12:37:04 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'mirroring'
Nov 26 12:37:04 compute-0 podman[75418]: 2025-11-26 12:37:04.799540045 +0000 UTC m=+0.025582277 container create e0411e8ebb9eb1531bbd8dc0eb8d6c4ddac9ed03ef4677605b5c5c6868754ffb (image=quay.io/ceph/ceph:v18, name=upbeat_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:37:04 compute-0 systemd[1]: Started libpod-conmon-e0411e8ebb9eb1531bbd8dc0eb8d6c4ddac9ed03ef4677605b5c5c6868754ffb.scope.
Nov 26 12:37:04 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac0bf6284ebb8ce98924b55caa0590ef0756d3dc72d2f90f9a3814fcac4dc141/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac0bf6284ebb8ce98924b55caa0590ef0756d3dc72d2f90f9a3814fcac4dc141/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac0bf6284ebb8ce98924b55caa0590ef0756d3dc72d2f90f9a3814fcac4dc141/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:04 compute-0 podman[75418]: 2025-11-26 12:37:04.843604028 +0000 UTC m=+0.069646279 container init e0411e8ebb9eb1531bbd8dc0eb8d6c4ddac9ed03ef4677605b5c5c6868754ffb (image=quay.io/ceph/ceph:v18, name=upbeat_brattain, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 12:37:04 compute-0 podman[75418]: 2025-11-26 12:37:04.848068982 +0000 UTC m=+0.074111212 container start e0411e8ebb9eb1531bbd8dc0eb8d6c4ddac9ed03ef4677605b5c5c6868754ffb (image=quay.io/ceph/ceph:v18, name=upbeat_brattain, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 12:37:04 compute-0 podman[75418]: 2025-11-26 12:37:04.84978714 +0000 UTC m=+0.075829381 container attach e0411e8ebb9eb1531bbd8dc0eb8d6c4ddac9ed03ef4677605b5c5c6868754ffb (image=quay.io/ceph/ceph:v18, name=upbeat_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 12:37:04 compute-0 podman[75418]: 2025-11-26 12:37:04.789110664 +0000 UTC m=+0.015152915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:04 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'nfs'
Nov 26 12:37:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 12:37:05 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3521650800' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]: 
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]: {
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:     "fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:     "health": {
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "status": "HEALTH_OK",
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "checks": {},
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "mutes": []
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:     },
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:     "election_epoch": 5,
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:     "quorum": [
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         0
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:     ],
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:     "quorum_names": [
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "compute-0"
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:     ],
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:     "quorum_age": 9,
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:     "monmap": {
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "epoch": 1,
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "min_mon_release_name": "reef",
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "num_mons": 1
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:     },
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:     "osdmap": {
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "epoch": 1,
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "num_osds": 0,
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "num_up_osds": 0,
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "osd_up_since": 0,
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "num_in_osds": 0,
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "osd_in_since": 0,
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "num_remapped_pgs": 0
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:     },
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:     "pgmap": {
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "pgs_by_state": [],
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "num_pgs": 0,
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "num_pools": 0,
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "num_objects": 0,
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "data_bytes": 0,
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "bytes_used": 0,
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "bytes_avail": 0,
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "bytes_total": 0
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:     },
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:     "fsmap": {
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "epoch": 1,
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "by_rank": [],
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "up:standby": 0
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:     },
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:     "mgrmap": {
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "available": false,
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "num_standbys": 0,
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "modules": [
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:             "iostat",
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:             "nfs",
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:             "restful"
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         ],
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "services": {}
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:     },
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:     "servicemap": {
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "epoch": 1,
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "modified": "2025-11-26T12:36:53.922147+0000",
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:         "services": {}
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:     },
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]:     "progress_events": {}
Nov 26 12:37:05 compute-0 upbeat_brattain[75432]: }
Nov 26 12:37:05 compute-0 systemd[1]: libpod-e0411e8ebb9eb1531bbd8dc0eb8d6c4ddac9ed03ef4677605b5c5c6868754ffb.scope: Deactivated successfully.
Nov 26 12:37:05 compute-0 podman[75418]: 2025-11-26 12:37:05.174366382 +0000 UTC m=+0.400408613 container died e0411e8ebb9eb1531bbd8dc0eb8d6c4ddac9ed03ef4677605b5c5c6868754ffb (image=quay.io/ceph/ceph:v18, name=upbeat_brattain, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:37:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac0bf6284ebb8ce98924b55caa0590ef0756d3dc72d2f90f9a3814fcac4dc141-merged.mount: Deactivated successfully.
Nov 26 12:37:05 compute-0 podman[75418]: 2025-11-26 12:37:05.195588251 +0000 UTC m=+0.421630483 container remove e0411e8ebb9eb1531bbd8dc0eb8d6c4ddac9ed03ef4677605b5c5c6868754ffb (image=quay.io/ceph/ceph:v18, name=upbeat_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 12:37:05 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/3521650800' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 12:37:05 compute-0 systemd[1]: libpod-conmon-e0411e8ebb9eb1531bbd8dc0eb8d6c4ddac9ed03ef4677605b5c5c6868754ffb.scope: Deactivated successfully.
Nov 26 12:37:05 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:05.520+0000 7f954fa56140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 26 12:37:05 compute-0 ceph-mgr[75236]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 26 12:37:05 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'orchestrator'
Nov 26 12:37:06 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:06.097+0000 7f954fa56140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 26 12:37:06 compute-0 ceph-mgr[75236]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 26 12:37:06 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'osd_perf_query'
Nov 26 12:37:06 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:06.330+0000 7f954fa56140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 26 12:37:06 compute-0 ceph-mgr[75236]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 26 12:37:06 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'osd_support'
Nov 26 12:37:06 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:06.536+0000 7f954fa56140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 26 12:37:06 compute-0 ceph-mgr[75236]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 26 12:37:06 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'pg_autoscaler'
Nov 26 12:37:06 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:06.773+0000 7f954fa56140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 26 12:37:06 compute-0 ceph-mgr[75236]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 26 12:37:06 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'progress'
Nov 26 12:37:06 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:06.981+0000 7f954fa56140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 26 12:37:06 compute-0 ceph-mgr[75236]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 26 12:37:06 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'prometheus'
Nov 26 12:37:07 compute-0 podman[75468]: 2025-11-26 12:37:07.237710246 +0000 UTC m=+0.026295702 container create 416f31ec3fe96526cc4685594c9773f31292ae36541ed147cf0cd1fb6de8c9af (image=quay.io/ceph/ceph:v18, name=gallant_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 26 12:37:07 compute-0 systemd[1]: Started libpod-conmon-416f31ec3fe96526cc4685594c9773f31292ae36541ed147cf0cd1fb6de8c9af.scope.
Nov 26 12:37:07 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c0c7388464e9c24bb45701e3d906ba3471b6f53ac2156485f6686113e485d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c0c7388464e9c24bb45701e3d906ba3471b6f53ac2156485f6686113e485d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c0c7388464e9c24bb45701e3d906ba3471b6f53ac2156485f6686113e485d8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:07 compute-0 podman[75468]: 2025-11-26 12:37:07.281827148 +0000 UTC m=+0.070412614 container init 416f31ec3fe96526cc4685594c9773f31292ae36541ed147cf0cd1fb6de8c9af (image=quay.io/ceph/ceph:v18, name=gallant_maxwell, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 12:37:07 compute-0 podman[75468]: 2025-11-26 12:37:07.28658858 +0000 UTC m=+0.075174035 container start 416f31ec3fe96526cc4685594c9773f31292ae36541ed147cf0cd1fb6de8c9af (image=quay.io/ceph/ceph:v18, name=gallant_maxwell, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 26 12:37:07 compute-0 podman[75468]: 2025-11-26 12:37:07.288821708 +0000 UTC m=+0.077407164 container attach 416f31ec3fe96526cc4685594c9773f31292ae36541ed147cf0cd1fb6de8c9af (image=quay.io/ceph/ceph:v18, name=gallant_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 12:37:07 compute-0 podman[75468]: 2025-11-26 12:37:07.227065086 +0000 UTC m=+0.015650563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:07 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 12:37:07 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/788382095' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]: 
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]: {
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:     "fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:     "health": {
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "status": "HEALTH_OK",
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "checks": {},
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "mutes": []
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:     },
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:     "election_epoch": 5,
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:     "quorum": [
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         0
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:     ],
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:     "quorum_names": [
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "compute-0"
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:     ],
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:     "quorum_age": 11,
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:     "monmap": {
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "epoch": 1,
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "min_mon_release_name": "reef",
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "num_mons": 1
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:     },
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:     "osdmap": {
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "epoch": 1,
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "num_osds": 0,
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "num_up_osds": 0,
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "osd_up_since": 0,
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "num_in_osds": 0,
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "osd_in_since": 0,
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "num_remapped_pgs": 0
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:     },
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:     "pgmap": {
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "pgs_by_state": [],
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "num_pgs": 0,
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "num_pools": 0,
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "num_objects": 0,
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "data_bytes": 0,
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "bytes_used": 0,
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "bytes_avail": 0,
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "bytes_total": 0
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:     },
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:     "fsmap": {
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "epoch": 1,
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "by_rank": [],
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "up:standby": 0
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:     },
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:     "mgrmap": {
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "available": false,
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "num_standbys": 0,
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "modules": [
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:             "iostat",
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:             "nfs",
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:             "restful"
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         ],
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "services": {}
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:     },
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:     "servicemap": {
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "epoch": 1,
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "modified": "2025-11-26T12:36:53.922147+0000",
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:         "services": {}
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:     },
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]:     "progress_events": {}
Nov 26 12:37:07 compute-0 gallant_maxwell[75482]: }
Nov 26 12:37:07 compute-0 systemd[1]: libpod-416f31ec3fe96526cc4685594c9773f31292ae36541ed147cf0cd1fb6de8c9af.scope: Deactivated successfully.
Nov 26 12:37:07 compute-0 podman[75508]: 2025-11-26 12:37:07.637445569 +0000 UTC m=+0.016011412 container died 416f31ec3fe96526cc4685594c9773f31292ae36541ed147cf0cd1fb6de8c9af (image=quay.io/ceph/ceph:v18, name=gallant_maxwell, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:37:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-51c0c7388464e9c24bb45701e3d906ba3471b6f53ac2156485f6686113e485d8-merged.mount: Deactivated successfully.
Nov 26 12:37:07 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/788382095' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 12:37:07 compute-0 podman[75508]: 2025-11-26 12:37:07.658253158 +0000 UTC m=+0.036819000 container remove 416f31ec3fe96526cc4685594c9773f31292ae36541ed147cf0cd1fb6de8c9af (image=quay.io/ceph/ceph:v18, name=gallant_maxwell, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:37:07 compute-0 systemd[1]: libpod-conmon-416f31ec3fe96526cc4685594c9773f31292ae36541ed147cf0cd1fb6de8c9af.scope: Deactivated successfully.
Nov 26 12:37:07 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:07.855+0000 7f954fa56140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 26 12:37:07 compute-0 ceph-mgr[75236]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 26 12:37:07 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'rbd_support'
Nov 26 12:37:08 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:08.116+0000 7f954fa56140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 26 12:37:08 compute-0 ceph-mgr[75236]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 26 12:37:08 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'restful'
Nov 26 12:37:08 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'rgw'
Nov 26 12:37:09 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:09.342+0000 7f954fa56140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 26 12:37:09 compute-0 ceph-mgr[75236]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 26 12:37:09 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'rook'
Nov 26 12:37:09 compute-0 podman[75520]: 2025-11-26 12:37:09.702150948 +0000 UTC m=+0.025671475 container create 47ee2f56ac07c6b62be21775291dda515489ec628e10ea5328584a9b56ad78c0 (image=quay.io/ceph/ceph:v18, name=vigorous_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 12:37:09 compute-0 systemd[1]: Started libpod-conmon-47ee2f56ac07c6b62be21775291dda515489ec628e10ea5328584a9b56ad78c0.scope.
Nov 26 12:37:09 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/314271cfff462579c94ba03f6d2235107c6b2c32f0d8ccdbf6a529e17bcae51b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/314271cfff462579c94ba03f6d2235107c6b2c32f0d8ccdbf6a529e17bcae51b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/314271cfff462579c94ba03f6d2235107c6b2c32f0d8ccdbf6a529e17bcae51b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:09 compute-0 podman[75520]: 2025-11-26 12:37:09.751407225 +0000 UTC m=+0.074927762 container init 47ee2f56ac07c6b62be21775291dda515489ec628e10ea5328584a9b56ad78c0 (image=quay.io/ceph/ceph:v18, name=vigorous_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 26 12:37:09 compute-0 podman[75520]: 2025-11-26 12:37:09.755219959 +0000 UTC m=+0.078740486 container start 47ee2f56ac07c6b62be21775291dda515489ec628e10ea5328584a9b56ad78c0 (image=quay.io/ceph/ceph:v18, name=vigorous_yalow, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:37:09 compute-0 podman[75520]: 2025-11-26 12:37:09.756290326 +0000 UTC m=+0.079810853 container attach 47ee2f56ac07c6b62be21775291dda515489ec628e10ea5328584a9b56ad78c0 (image=quay.io/ceph/ceph:v18, name=vigorous_yalow, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 12:37:09 compute-0 podman[75520]: 2025-11-26 12:37:09.691244857 +0000 UTC m=+0.014765404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:10 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 12:37:10 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3569570021' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]: 
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]: {
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:     "fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:     "health": {
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "status": "HEALTH_OK",
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "checks": {},
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "mutes": []
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:     },
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:     "election_epoch": 5,
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:     "quorum": [
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         0
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:     ],
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:     "quorum_names": [
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "compute-0"
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:     ],
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:     "quorum_age": 14,
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:     "monmap": {
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "epoch": 1,
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "min_mon_release_name": "reef",
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "num_mons": 1
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:     },
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:     "osdmap": {
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "epoch": 1,
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "num_osds": 0,
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "num_up_osds": 0,
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "osd_up_since": 0,
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "num_in_osds": 0,
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "osd_in_since": 0,
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "num_remapped_pgs": 0
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:     },
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:     "pgmap": {
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "pgs_by_state": [],
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "num_pgs": 0,
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "num_pools": 0,
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "num_objects": 0,
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "data_bytes": 0,
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "bytes_used": 0,
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "bytes_avail": 0,
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "bytes_total": 0
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:     },
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:     "fsmap": {
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "epoch": 1,
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "by_rank": [],
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "up:standby": 0
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:     },
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:     "mgrmap": {
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "available": false,
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "num_standbys": 0,
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "modules": [
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:             "iostat",
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:             "nfs",
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:             "restful"
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         ],
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "services": {}
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:     },
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:     "servicemap": {
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "epoch": 1,
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "modified": "2025-11-26T12:36:53.922147+0000",
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:         "services": {}
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:     },
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]:     "progress_events": {}
Nov 26 12:37:10 compute-0 vigorous_yalow[75533]: }
Nov 26 12:37:10 compute-0 systemd[1]: libpod-47ee2f56ac07c6b62be21775291dda515489ec628e10ea5328584a9b56ad78c0.scope: Deactivated successfully.
Nov 26 12:37:10 compute-0 podman[75520]: 2025-11-26 12:37:10.079256078 +0000 UTC m=+0.402776605 container died 47ee2f56ac07c6b62be21775291dda515489ec628e10ea5328584a9b56ad78c0 (image=quay.io/ceph/ceph:v18, name=vigorous_yalow, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 12:37:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-314271cfff462579c94ba03f6d2235107c6b2c32f0d8ccdbf6a529e17bcae51b-merged.mount: Deactivated successfully.
Nov 26 12:37:10 compute-0 podman[75520]: 2025-11-26 12:37:10.101148441 +0000 UTC m=+0.424668968 container remove 47ee2f56ac07c6b62be21775291dda515489ec628e10ea5328584a9b56ad78c0 (image=quay.io/ceph/ceph:v18, name=vigorous_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 12:37:10 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/3569570021' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 12:37:10 compute-0 systemd[1]: libpod-conmon-47ee2f56ac07c6b62be21775291dda515489ec628e10ea5328584a9b56ad78c0.scope: Deactivated successfully.
Nov 26 12:37:11 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:11.137+0000 7f954fa56140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 26 12:37:11 compute-0 ceph-mgr[75236]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 26 12:37:11 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'selftest'
Nov 26 12:37:11 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:11.349+0000 7f954fa56140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 26 12:37:11 compute-0 ceph-mgr[75236]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 26 12:37:11 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'snap_schedule'
Nov 26 12:37:11 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:11.566+0000 7f954fa56140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 26 12:37:11 compute-0 ceph-mgr[75236]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 26 12:37:11 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'stats'
Nov 26 12:37:11 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'status'
Nov 26 12:37:12 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:12.008+0000 7f954fa56140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 26 12:37:12 compute-0 ceph-mgr[75236]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 26 12:37:12 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'telegraf'
Nov 26 12:37:12 compute-0 podman[75569]: 2025-11-26 12:37:12.143096347 +0000 UTC m=+0.025741777 container create 902535c376d03cb7cfa57507fa9de3ea6377ed650a56e54e69217d0dd09c99fa (image=quay.io/ceph/ceph:v18, name=gracious_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:37:12 compute-0 systemd[1]: Started libpod-conmon-902535c376d03cb7cfa57507fa9de3ea6377ed650a56e54e69217d0dd09c99fa.scope.
Nov 26 12:37:12 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af8e84016a22324c9245c04300e0ba0a1da3eb2e412be0bc04b0eac2c407b379/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af8e84016a22324c9245c04300e0ba0a1da3eb2e412be0bc04b0eac2c407b379/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af8e84016a22324c9245c04300e0ba0a1da3eb2e412be0bc04b0eac2c407b379/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:12 compute-0 podman[75569]: 2025-11-26 12:37:12.197103646 +0000 UTC m=+0.079749076 container init 902535c376d03cb7cfa57507fa9de3ea6377ed650a56e54e69217d0dd09c99fa (image=quay.io/ceph/ceph:v18, name=gracious_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:37:12 compute-0 podman[75569]: 2025-11-26 12:37:12.201391506 +0000 UTC m=+0.084036937 container start 902535c376d03cb7cfa57507fa9de3ea6377ed650a56e54e69217d0dd09c99fa (image=quay.io/ceph/ceph:v18, name=gracious_visvesvaraya, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 12:37:12 compute-0 podman[75569]: 2025-11-26 12:37:12.202463256 +0000 UTC m=+0.085108686 container attach 902535c376d03cb7cfa57507fa9de3ea6377ed650a56e54e69217d0dd09c99fa (image=quay.io/ceph/ceph:v18, name=gracious_visvesvaraya, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 12:37:12 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:12.214+0000 7f954fa56140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 26 12:37:12 compute-0 ceph-mgr[75236]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 26 12:37:12 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'telemetry'
Nov 26 12:37:12 compute-0 podman[75569]: 2025-11-26 12:37:12.132386075 +0000 UTC m=+0.015031525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:12 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 12:37:12 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2517756581' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]: 
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]: {
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:     "fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:     "health": {
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "status": "HEALTH_OK",
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "checks": {},
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "mutes": []
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:     },
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:     "election_epoch": 5,
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:     "quorum": [
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         0
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:     ],
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:     "quorum_names": [
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "compute-0"
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:     ],
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:     "quorum_age": 16,
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:     "monmap": {
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "epoch": 1,
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "min_mon_release_name": "reef",
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "num_mons": 1
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:     },
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:     "osdmap": {
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "epoch": 1,
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "num_osds": 0,
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "num_up_osds": 0,
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "osd_up_since": 0,
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "num_in_osds": 0,
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "osd_in_since": 0,
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "num_remapped_pgs": 0
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:     },
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:     "pgmap": {
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "pgs_by_state": [],
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "num_pgs": 0,
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "num_pools": 0,
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "num_objects": 0,
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "data_bytes": 0,
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "bytes_used": 0,
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "bytes_avail": 0,
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "bytes_total": 0
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:     },
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:     "fsmap": {
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "epoch": 1,
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "by_rank": [],
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "up:standby": 0
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:     },
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:     "mgrmap": {
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "available": false,
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "num_standbys": 0,
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "modules": [
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:             "iostat",
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:             "nfs",
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:             "restful"
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         ],
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "services": {}
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:     },
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:     "servicemap": {
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "epoch": 1,
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "modified": "2025-11-26T12:36:53.922147+0000",
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:         "services": {}
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:     },
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]:     "progress_events": {}
Nov 26 12:37:12 compute-0 gracious_visvesvaraya[75583]: }
Nov 26 12:37:12 compute-0 systemd[1]: libpod-902535c376d03cb7cfa57507fa9de3ea6377ed650a56e54e69217d0dd09c99fa.scope: Deactivated successfully.
Nov 26 12:37:12 compute-0 podman[75569]: 2025-11-26 12:37:12.524593473 +0000 UTC m=+0.407238913 container died 902535c376d03cb7cfa57507fa9de3ea6377ed650a56e54e69217d0dd09c99fa (image=quay.io/ceph/ceph:v18, name=gracious_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:37:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-af8e84016a22324c9245c04300e0ba0a1da3eb2e412be0bc04b0eac2c407b379-merged.mount: Deactivated successfully.
Nov 26 12:37:12 compute-0 podman[75569]: 2025-11-26 12:37:12.549547706 +0000 UTC m=+0.432193135 container remove 902535c376d03cb7cfa57507fa9de3ea6377ed650a56e54e69217d0dd09c99fa (image=quay.io/ceph/ceph:v18, name=gracious_visvesvaraya, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:37:12 compute-0 systemd[1]: libpod-conmon-902535c376d03cb7cfa57507fa9de3ea6377ed650a56e54e69217d0dd09c99fa.scope: Deactivated successfully.
Nov 26 12:37:12 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2517756581' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 12:37:12 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:12.730+0000 7f954fa56140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 26 12:37:12 compute-0 ceph-mgr[75236]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 26 12:37:12 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'test_orchestrator'
Nov 26 12:37:13 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:13.305+0000 7f954fa56140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 26 12:37:13 compute-0 ceph-mgr[75236]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 26 12:37:13 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'volumes'
Nov 26 12:37:13 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:13.919+0000 7f954fa56140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 26 12:37:13 compute-0 ceph-mgr[75236]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 26 12:37:13 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'zabbix'
Nov 26 12:37:14 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:14.129+0000 7f954fa56140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: ms_deliver_dispatch: unhandled message 0x563413f2e420 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 26 12:37:14 compute-0 ceph-mon[74966]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.whkbdn
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: mgr handle_mgr_map Activating!
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: mgr handle_mgr_map I am now activating
Nov 26 12:37:14 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.whkbdn(active, starting, since 0.00515095s)
Nov 26 12:37:14 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 26 12:37:14 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 26 12:37:14 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).mds e1 all = 1
Nov 26 12:37:14 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 26 12:37:14 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 26 12:37:14 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 26 12:37:14 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 26 12:37:14 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 26 12:37:14 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 26 12:37:14 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.whkbdn", "id": "compute-0.whkbdn"} v 0) v1
Nov 26 12:37:14 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mgr metadata", "who": "compute-0.whkbdn", "id": "compute-0.whkbdn"}]: dispatch
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: mgr load Constructed class from module: balancer
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: mgr load Constructed class from module: crash
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [balancer INFO root] Starting
Nov 26 12:37:14 compute-0 ceph-mon[74966]: log_channel(cluster) log [INF] : Manager daemon compute-0.whkbdn is now available
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: mgr load Constructed class from module: devicehealth
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:37:14
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [balancer INFO root] No pools available
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [devicehealth INFO root] Starting
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: mgr load Constructed class from module: iostat
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: mgr load Constructed class from module: nfs
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: mgr load Constructed class from module: orchestrator
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: mgr load Constructed class from module: pg_autoscaler
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: mgr load Constructed class from module: progress
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [progress INFO root] Loading...
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [progress INFO root] No stored events to load
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [progress INFO root] Loaded [] historic events
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [progress INFO root] Loaded OSDMap, ready.
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [rbd_support INFO root] recovery thread starting
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [rbd_support INFO root] starting setup
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: mgr load Constructed class from module: rbd_support
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: mgr load Constructed class from module: restful
Nov 26 12:37:14 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.whkbdn/mirror_snapshot_schedule"} v 0) v1
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [restful INFO root] server_addr: :: server_port: 8003
Nov 26 12:37:14 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.whkbdn/mirror_snapshot_schedule"}]: dispatch
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: mgr load Constructed class from module: status
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [restful WARNING root] server not running: no certificate configured
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: mgr load Constructed class from module: telemetry
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 26 12:37:14 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [rbd_support INFO root] PerfHandler: starting
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [rbd_support INFO root] TaskHandler: starting
Nov 26 12:37:14 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:14 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.whkbdn/trash_purge_schedule"} v 0) v1
Nov 26 12:37:14 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.whkbdn/trash_purge_schedule"}]: dispatch
Nov 26 12:37:14 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: [rbd_support INFO root] setup complete
Nov 26 12:37:14 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:14 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Nov 26 12:37:14 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:14 compute-0 ceph-mgr[75236]: mgr load Constructed class from module: volumes
Nov 26 12:37:14 compute-0 ceph-mon[74966]: Activating manager daemon compute-0.whkbdn
Nov 26 12:37:14 compute-0 ceph-mon[74966]: mgrmap e2: compute-0.whkbdn(active, starting, since 0.00515095s)
Nov 26 12:37:14 compute-0 ceph-mon[74966]: from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 26 12:37:14 compute-0 ceph-mon[74966]: from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 26 12:37:14 compute-0 ceph-mon[74966]: from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 26 12:37:14 compute-0 ceph-mon[74966]: from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 26 12:37:14 compute-0 ceph-mon[74966]: from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mgr metadata", "who": "compute-0.whkbdn", "id": "compute-0.whkbdn"}]: dispatch
Nov 26 12:37:14 compute-0 ceph-mon[74966]: Manager daemon compute-0.whkbdn is now available
Nov 26 12:37:14 compute-0 ceph-mon[74966]: from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.whkbdn/mirror_snapshot_schedule"}]: dispatch
Nov 26 12:37:14 compute-0 ceph-mon[74966]: from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:14 compute-0 ceph-mon[74966]: from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.whkbdn/trash_purge_schedule"}]: dispatch
Nov 26 12:37:14 compute-0 ceph-mon[74966]: from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:14 compute-0 ceph-mon[74966]: from='mgr.14102 192.168.122.100:0/3660257186' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:14 compute-0 podman[75697]: 2025-11-26 12:37:14.590136721 +0000 UTC m=+0.025052570 container create 87dfc247728d8940c5032e8e59313de57abb98d2e1d83751fcece4248a10716f (image=quay.io/ceph/ceph:v18, name=priceless_payne, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 12:37:14 compute-0 systemd[1]: Started libpod-conmon-87dfc247728d8940c5032e8e59313de57abb98d2e1d83751fcece4248a10716f.scope.
Nov 26 12:37:14 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46edef56fadee100eed8d85b0abeb437c25749ff5d8c9c600443c67a45342dc7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46edef56fadee100eed8d85b0abeb437c25749ff5d8c9c600443c67a45342dc7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46edef56fadee100eed8d85b0abeb437c25749ff5d8c9c600443c67a45342dc7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:14 compute-0 podman[75697]: 2025-11-26 12:37:14.638915968 +0000 UTC m=+0.073831847 container init 87dfc247728d8940c5032e8e59313de57abb98d2e1d83751fcece4248a10716f (image=quay.io/ceph/ceph:v18, name=priceless_payne, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Nov 26 12:37:14 compute-0 podman[75697]: 2025-11-26 12:37:14.643495157 +0000 UTC m=+0.078411016 container start 87dfc247728d8940c5032e8e59313de57abb98d2e1d83751fcece4248a10716f (image=quay.io/ceph/ceph:v18, name=priceless_payne, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 12:37:14 compute-0 podman[75697]: 2025-11-26 12:37:14.644604739 +0000 UTC m=+0.079520596 container attach 87dfc247728d8940c5032e8e59313de57abb98d2e1d83751fcece4248a10716f (image=quay.io/ceph/ceph:v18, name=priceless_payne, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Nov 26 12:37:14 compute-0 podman[75697]: 2025-11-26 12:37:14.580033132 +0000 UTC m=+0.014949000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:14 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 12:37:14 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/695035198' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 12:37:14 compute-0 priceless_payne[75711]: 
Nov 26 12:37:14 compute-0 priceless_payne[75711]: {
Nov 26 12:37:14 compute-0 priceless_payne[75711]:     "fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:37:14 compute-0 priceless_payne[75711]:     "health": {
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "status": "HEALTH_OK",
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "checks": {},
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "mutes": []
Nov 26 12:37:14 compute-0 priceless_payne[75711]:     },
Nov 26 12:37:14 compute-0 priceless_payne[75711]:     "election_epoch": 5,
Nov 26 12:37:14 compute-0 priceless_payne[75711]:     "quorum": [
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         0
Nov 26 12:37:14 compute-0 priceless_payne[75711]:     ],
Nov 26 12:37:14 compute-0 priceless_payne[75711]:     "quorum_names": [
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "compute-0"
Nov 26 12:37:14 compute-0 priceless_payne[75711]:     ],
Nov 26 12:37:14 compute-0 priceless_payne[75711]:     "quorum_age": 19,
Nov 26 12:37:14 compute-0 priceless_payne[75711]:     "monmap": {
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "epoch": 1,
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "min_mon_release_name": "reef",
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "num_mons": 1
Nov 26 12:37:14 compute-0 priceless_payne[75711]:     },
Nov 26 12:37:14 compute-0 priceless_payne[75711]:     "osdmap": {
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "epoch": 1,
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "num_osds": 0,
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "num_up_osds": 0,
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "osd_up_since": 0,
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "num_in_osds": 0,
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "osd_in_since": 0,
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "num_remapped_pgs": 0
Nov 26 12:37:14 compute-0 priceless_payne[75711]:     },
Nov 26 12:37:14 compute-0 priceless_payne[75711]:     "pgmap": {
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "pgs_by_state": [],
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "num_pgs": 0,
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "num_pools": 0,
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "num_objects": 0,
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "data_bytes": 0,
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "bytes_used": 0,
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "bytes_avail": 0,
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "bytes_total": 0
Nov 26 12:37:14 compute-0 priceless_payne[75711]:     },
Nov 26 12:37:14 compute-0 priceless_payne[75711]:     "fsmap": {
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "epoch": 1,
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "by_rank": [],
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "up:standby": 0
Nov 26 12:37:14 compute-0 priceless_payne[75711]:     },
Nov 26 12:37:14 compute-0 priceless_payne[75711]:     "mgrmap": {
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "available": false,
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "num_standbys": 0,
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "modules": [
Nov 26 12:37:14 compute-0 priceless_payne[75711]:             "iostat",
Nov 26 12:37:14 compute-0 priceless_payne[75711]:             "nfs",
Nov 26 12:37:14 compute-0 priceless_payne[75711]:             "restful"
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         ],
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "services": {}
Nov 26 12:37:14 compute-0 priceless_payne[75711]:     },
Nov 26 12:37:14 compute-0 priceless_payne[75711]:     "servicemap": {
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "epoch": 1,
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "modified": "2025-11-26T12:36:53.922147+0000",
Nov 26 12:37:14 compute-0 priceless_payne[75711]:         "services": {}
Nov 26 12:37:14 compute-0 priceless_payne[75711]:     },
Nov 26 12:37:14 compute-0 priceless_payne[75711]:     "progress_events": {}
Nov 26 12:37:14 compute-0 priceless_payne[75711]: }
Nov 26 12:37:14 compute-0 systemd[1]: libpod-87dfc247728d8940c5032e8e59313de57abb98d2e1d83751fcece4248a10716f.scope: Deactivated successfully.
Nov 26 12:37:14 compute-0 podman[75697]: 2025-11-26 12:37:14.96401459 +0000 UTC m=+0.398930448 container died 87dfc247728d8940c5032e8e59313de57abb98d2e1d83751fcece4248a10716f (image=quay.io/ceph/ceph:v18, name=priceless_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:37:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-46edef56fadee100eed8d85b0abeb437c25749ff5d8c9c600443c67a45342dc7-merged.mount: Deactivated successfully.
Nov 26 12:37:14 compute-0 podman[75697]: 2025-11-26 12:37:14.9863829 +0000 UTC m=+0.421298759 container remove 87dfc247728d8940c5032e8e59313de57abb98d2e1d83751fcece4248a10716f (image=quay.io/ceph/ceph:v18, name=priceless_payne, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Nov 26 12:37:14 compute-0 systemd[1]: libpod-conmon-87dfc247728d8940c5032e8e59313de57abb98d2e1d83751fcece4248a10716f.scope: Deactivated successfully.
Nov 26 12:37:15 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.whkbdn(active, since 1.00955s)
Nov 26 12:37:15 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/695035198' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 12:37:15 compute-0 ceph-mon[74966]: mgrmap e3: compute-0.whkbdn(active, since 1.00955s)
Nov 26 12:37:16 compute-0 ceph-mgr[75236]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 12:37:16 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.whkbdn(active, since 2s)
Nov 26 12:37:17 compute-0 podman[75747]: 2025-11-26 12:37:17.027252003 +0000 UTC m=+0.025162456 container create f70d0f856bd37d8fa52287091c01afd11154489c1ba6ec179ccc5b88b0d235e0 (image=quay.io/ceph/ceph:v18, name=competent_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 12:37:17 compute-0 systemd[1]: Started libpod-conmon-f70d0f856bd37d8fa52287091c01afd11154489c1ba6ec179ccc5b88b0d235e0.scope.
Nov 26 12:37:17 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8694c675355fd7f294ad05f6c503edcf6f8eae94011d680db385e96b110559a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8694c675355fd7f294ad05f6c503edcf6f8eae94011d680db385e96b110559a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8694c675355fd7f294ad05f6c503edcf6f8eae94011d680db385e96b110559a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:17 compute-0 podman[75747]: 2025-11-26 12:37:17.07970321 +0000 UTC m=+0.077613673 container init f70d0f856bd37d8fa52287091c01afd11154489c1ba6ec179ccc5b88b0d235e0 (image=quay.io/ceph/ceph:v18, name=competent_payne, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Nov 26 12:37:17 compute-0 podman[75747]: 2025-11-26 12:37:17.08329641 +0000 UTC m=+0.081206863 container start f70d0f856bd37d8fa52287091c01afd11154489c1ba6ec179ccc5b88b0d235e0 (image=quay.io/ceph/ceph:v18, name=competent_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 12:37:17 compute-0 podman[75747]: 2025-11-26 12:37:17.084374101 +0000 UTC m=+0.082284554 container attach f70d0f856bd37d8fa52287091c01afd11154489c1ba6ec179ccc5b88b0d235e0 (image=quay.io/ceph/ceph:v18, name=competent_payne, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:37:17 compute-0 podman[75747]: 2025-11-26 12:37:17.017280984 +0000 UTC m=+0.015191457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:17 compute-0 ceph-mon[74966]: mgrmap e4: compute-0.whkbdn(active, since 2s)
Nov 26 12:37:17 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 12:37:17 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/511061859' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 12:37:17 compute-0 competent_payne[75760]: 
Nov 26 12:37:17 compute-0 competent_payne[75760]: {
Nov 26 12:37:17 compute-0 competent_payne[75760]:     "fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:37:17 compute-0 competent_payne[75760]:     "health": {
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "status": "HEALTH_OK",
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "checks": {},
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "mutes": []
Nov 26 12:37:17 compute-0 competent_payne[75760]:     },
Nov 26 12:37:17 compute-0 competent_payne[75760]:     "election_epoch": 5,
Nov 26 12:37:17 compute-0 competent_payne[75760]:     "quorum": [
Nov 26 12:37:17 compute-0 competent_payne[75760]:         0
Nov 26 12:37:17 compute-0 competent_payne[75760]:     ],
Nov 26 12:37:17 compute-0 competent_payne[75760]:     "quorum_names": [
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "compute-0"
Nov 26 12:37:17 compute-0 competent_payne[75760]:     ],
Nov 26 12:37:17 compute-0 competent_payne[75760]:     "quorum_age": 21,
Nov 26 12:37:17 compute-0 competent_payne[75760]:     "monmap": {
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "epoch": 1,
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "min_mon_release_name": "reef",
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "num_mons": 1
Nov 26 12:37:17 compute-0 competent_payne[75760]:     },
Nov 26 12:37:17 compute-0 competent_payne[75760]:     "osdmap": {
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "epoch": 1,
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "num_osds": 0,
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "num_up_osds": 0,
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "osd_up_since": 0,
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "num_in_osds": 0,
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "osd_in_since": 0,
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "num_remapped_pgs": 0
Nov 26 12:37:17 compute-0 competent_payne[75760]:     },
Nov 26 12:37:17 compute-0 competent_payne[75760]:     "pgmap": {
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "pgs_by_state": [],
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "num_pgs": 0,
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "num_pools": 0,
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "num_objects": 0,
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "data_bytes": 0,
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "bytes_used": 0,
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "bytes_avail": 0,
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "bytes_total": 0
Nov 26 12:37:17 compute-0 competent_payne[75760]:     },
Nov 26 12:37:17 compute-0 competent_payne[75760]:     "fsmap": {
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "epoch": 1,
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "by_rank": [],
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "up:standby": 0
Nov 26 12:37:17 compute-0 competent_payne[75760]:     },
Nov 26 12:37:17 compute-0 competent_payne[75760]:     "mgrmap": {
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "available": true,
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "num_standbys": 0,
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "modules": [
Nov 26 12:37:17 compute-0 competent_payne[75760]:             "iostat",
Nov 26 12:37:17 compute-0 competent_payne[75760]:             "nfs",
Nov 26 12:37:17 compute-0 competent_payne[75760]:             "restful"
Nov 26 12:37:17 compute-0 competent_payne[75760]:         ],
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "services": {}
Nov 26 12:37:17 compute-0 competent_payne[75760]:     },
Nov 26 12:37:17 compute-0 competent_payne[75760]:     "servicemap": {
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "epoch": 1,
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "modified": "2025-11-26T12:36:53.922147+0000",
Nov 26 12:37:17 compute-0 competent_payne[75760]:         "services": {}
Nov 26 12:37:17 compute-0 competent_payne[75760]:     },
Nov 26 12:37:17 compute-0 competent_payne[75760]:     "progress_events": {}
Nov 26 12:37:17 compute-0 competent_payne[75760]: }
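The JSON block above is the cluster status returned for the mon_command {"prefix": "status", "format": "json-pretty"} dispatched at 12:37:17. A minimal Python sketch of parsing and checking that output; the literal embedded below reproduces only a few of the fields shown above and is illustrative, not a complete copy of the status document.

import json

# Excerpt of the status JSON printed by the bootstrap container above;
# only a handful of fields are reproduced here for illustration.
status_json = """
{
    "fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
    "health": {"status": "HEALTH_OK", "checks": {}, "mutes": []},
    "quorum_names": ["compute-0"],
    "osdmap": {"num_osds": 0, "num_up_osds": 0, "num_in_osds": 0},
    "pgmap": {"num_pgs": 0, "num_pools": 0}
}
"""

status = json.loads(status_json)
print("health:", status["health"]["status"])            # HEALTH_OK
print("quorum:", ",".join(status["quorum_names"]))       # compute-0
print("osds up/in:", status["osdmap"]["num_up_osds"],
      "/", status["osdmap"]["num_in_osds"])              # 0 / 0 (no OSDs yet)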
Nov 26 12:37:17 compute-0 systemd[1]: libpod-f70d0f856bd37d8fa52287091c01afd11154489c1ba6ec179ccc5b88b0d235e0.scope: Deactivated successfully.
Nov 26 12:37:17 compute-0 podman[75747]: 2025-11-26 12:37:17.5700463 +0000 UTC m=+0.567956753 container died f70d0f856bd37d8fa52287091c01afd11154489c1ba6ec179ccc5b88b0d235e0 (image=quay.io/ceph/ceph:v18, name=competent_payne, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 12:37:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8694c675355fd7f294ad05f6c503edcf6f8eae94011d680db385e96b110559a-merged.mount: Deactivated successfully.
Nov 26 12:37:17 compute-0 podman[75747]: 2025-11-26 12:37:17.593298096 +0000 UTC m=+0.591208549 container remove f70d0f856bd37d8fa52287091c01afd11154489c1ba6ec179ccc5b88b0d235e0 (image=quay.io/ceph/ceph:v18, name=competent_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 12:37:17 compute-0 systemd[1]: libpod-conmon-f70d0f856bd37d8fa52287091c01afd11154489c1ba6ec179ccc5b88b0d235e0.scope: Deactivated successfully.
Nov 26 12:37:17 compute-0 podman[75795]: 2025-11-26 12:37:17.633433755 +0000 UTC m=+0.025947576 container create 698413d143694e7fd4d851ed6818e66b81f3a855770debfe9005848be720adb8 (image=quay.io/ceph/ceph:v18, name=nostalgic_vaughan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:37:17 compute-0 systemd[1]: Started libpod-conmon-698413d143694e7fd4d851ed6818e66b81f3a855770debfe9005848be720adb8.scope.
Nov 26 12:37:17 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ca63eb82c7645073d118288833baf8cc072172805f98f9a73f6fe9cd652260/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ca63eb82c7645073d118288833baf8cc072172805f98f9a73f6fe9cd652260/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ca63eb82c7645073d118288833baf8cc072172805f98f9a73f6fe9cd652260/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ca63eb82c7645073d118288833baf8cc072172805f98f9a73f6fe9cd652260/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:17 compute-0 podman[75795]: 2025-11-26 12:37:17.680147681 +0000 UTC m=+0.072661512 container init 698413d143694e7fd4d851ed6818e66b81f3a855770debfe9005848be720adb8 (image=quay.io/ceph/ceph:v18, name=nostalgic_vaughan, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:37:17 compute-0 podman[75795]: 2025-11-26 12:37:17.684529527 +0000 UTC m=+0.077043348 container start 698413d143694e7fd4d851ed6818e66b81f3a855770debfe9005848be720adb8 (image=quay.io/ceph/ceph:v18, name=nostalgic_vaughan, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 12:37:17 compute-0 podman[75795]: 2025-11-26 12:37:17.685636444 +0000 UTC m=+0.078150275 container attach 698413d143694e7fd4d851ed6818e66b81f3a855770debfe9005848be720adb8 (image=quay.io/ceph/ceph:v18, name=nostalgic_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:37:17 compute-0 podman[75795]: 2025-11-26 12:37:17.623407542 +0000 UTC m=+0.015921383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 26 12:37:18 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1377714412' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 26 12:37:18 compute-0 systemd[1]: libpod-698413d143694e7fd4d851ed6818e66b81f3a855770debfe9005848be720adb8.scope: Deactivated successfully.
Nov 26 12:37:18 compute-0 conmon[75810]: conmon 698413d143694e7fd4d8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-698413d143694e7fd4d851ed6818e66b81f3a855770debfe9005848be720adb8.scope/container/memory.events
Nov 26 12:37:18 compute-0 ceph-mgr[75236]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 12:37:18 compute-0 podman[75836]: 2025-11-26 12:37:18.138129762 +0000 UTC m=+0.017065558 container died 698413d143694e7fd4d851ed6818e66b81f3a855770debfe9005848be720adb8 (image=quay.io/ceph/ceph:v18, name=nostalgic_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 12:37:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-55ca63eb82c7645073d118288833baf8cc072172805f98f9a73f6fe9cd652260-merged.mount: Deactivated successfully.
Nov 26 12:37:18 compute-0 podman[75836]: 2025-11-26 12:37:18.1585231 +0000 UTC m=+0.037458896 container remove 698413d143694e7fd4d851ed6818e66b81f3a855770debfe9005848be720adb8 (image=quay.io/ceph/ceph:v18, name=nostalgic_vaughan, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 12:37:18 compute-0 systemd[1]: libpod-conmon-698413d143694e7fd4d851ed6818e66b81f3a855770debfe9005848be720adb8.scope: Deactivated successfully.
Nov 26 12:37:18 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/511061859' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 12:37:18 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1377714412' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 26 12:37:18 compute-0 podman[75848]: 2025-11-26 12:37:18.203409392 +0000 UTC m=+0.028662531 container create e28d76525d5f01d9f39c9e08ec969ebe53bad24c18ea6709add1f312151475f8 (image=quay.io/ceph/ceph:v18, name=admiring_sammet, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:37:18 compute-0 systemd[1]: Started libpod-conmon-e28d76525d5f01d9f39c9e08ec969ebe53bad24c18ea6709add1f312151475f8.scope.
Nov 26 12:37:18 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a269506520bb9c40e073cf3e15c76ad00855e20b1cf89f025bb12205da1e03f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a269506520bb9c40e073cf3e15c76ad00855e20b1cf89f025bb12205da1e03f3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a269506520bb9c40e073cf3e15c76ad00855e20b1cf89f025bb12205da1e03f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:18 compute-0 podman[75848]: 2025-11-26 12:37:18.2553942 +0000 UTC m=+0.080647350 container init e28d76525d5f01d9f39c9e08ec969ebe53bad24c18ea6709add1f312151475f8 (image=quay.io/ceph/ceph:v18, name=admiring_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 12:37:18 compute-0 podman[75848]: 2025-11-26 12:37:18.259028929 +0000 UTC m=+0.084282069 container start e28d76525d5f01d9f39c9e08ec969ebe53bad24c18ea6709add1f312151475f8 (image=quay.io/ceph/ceph:v18, name=admiring_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 12:37:18 compute-0 podman[75848]: 2025-11-26 12:37:18.260010388 +0000 UTC m=+0.085263528 container attach e28d76525d5f01d9f39c9e08ec969ebe53bad24c18ea6709add1f312151475f8 (image=quay.io/ceph/ceph:v18, name=admiring_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 12:37:18 compute-0 podman[75848]: 2025-11-26 12:37:18.191487367 +0000 UTC m=+0.016740527 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Nov 26 12:37:18 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/849019992' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 26 12:37:19 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/849019992' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 26 12:37:19 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/849019992' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
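Each of the short-lived containers above (competent_payne, nostalgic_vaughan, admiring_sammet) wraps a single ceph CLI call, and each call reaches the mon as the mon_command JSON shown in the handle_command lines. A hedged Python sketch of issuing the same "mgr module enable" request directly through the librados Python binding, assuming the ceph.conf and client.admin keyring bind-mounted into those containers are also readable at the default host paths; the connection details are assumptions, not taken from this log.

import json
import rados

# Assumes /etc/ceph/ceph.conf and the client.admin keyring (the files
# bind-mounted into the containers above) are readable on the host.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
cluster.connect()

# Same JSON payload that appears in the mon handle_command line above.
cmd = json.dumps({"prefix": "mgr module enable", "module": "cephadm"})
# mon_command takes the JSON command string and an (empty) input buffer.
ret, outbuf, outstr = cluster.mon_command(cmd, b"", timeout=30)
print("ret:", ret, "status:", outstr)

cluster.shutdown()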
Nov 26 12:37:19 compute-0 ceph-mgr[75236]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 26 12:37:19 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.whkbdn(active, since 5s)
Nov 26 12:37:19 compute-0 systemd[1]: libpod-e28d76525d5f01d9f39c9e08ec969ebe53bad24c18ea6709add1f312151475f8.scope: Deactivated successfully.
Nov 26 12:37:19 compute-0 conmon[75861]: conmon e28d76525d5f01d9f39c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e28d76525d5f01d9f39c9e08ec969ebe53bad24c18ea6709add1f312151475f8.scope/container/memory.events
Nov 26 12:37:19 compute-0 podman[75887]: 2025-11-26 12:37:19.240655341 +0000 UTC m=+0.014503600 container died e28d76525d5f01d9f39c9e08ec969ebe53bad24c18ea6709add1f312151475f8 (image=quay.io/ceph/ceph:v18, name=admiring_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 12:37:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-a269506520bb9c40e073cf3e15c76ad00855e20b1cf89f025bb12205da1e03f3-merged.mount: Deactivated successfully.
Nov 26 12:37:19 compute-0 podman[75887]: 2025-11-26 12:37:19.261159679 +0000 UTC m=+0.035007919 container remove e28d76525d5f01d9f39c9e08ec969ebe53bad24c18ea6709add1f312151475f8 (image=quay.io/ceph/ceph:v18, name=admiring_sammet, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:37:19 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: ignoring --setuser ceph since I am not root
Nov 26 12:37:19 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: ignoring --setgroup ceph since I am not root
Nov 26 12:37:19 compute-0 systemd[1]: libpod-conmon-e28d76525d5f01d9f39c9e08ec969ebe53bad24c18ea6709add1f312151475f8.scope: Deactivated successfully.
Nov 26 12:37:19 compute-0 ceph-mgr[75236]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 26 12:37:19 compute-0 ceph-mgr[75236]: pidfile_write: ignore empty --pid-file
Nov 26 12:37:19 compute-0 podman[75907]: 2025-11-26 12:37:19.303911028 +0000 UTC m=+0.026031523 container create 8b91ed3daf8912828f36811b9d69f310b6917e42b87ffe8797d6ee356f0f65fb (image=quay.io/ceph/ceph:v18, name=eloquent_curie, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:37:19 compute-0 systemd[1]: Started libpod-conmon-8b91ed3daf8912828f36811b9d69f310b6917e42b87ffe8797d6ee356f0f65fb.scope.
Nov 26 12:37:19 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bff736057b69a0b4d91d92941060042aaa1ab186dc3d1a04ba502f52cee258c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bff736057b69a0b4d91d92941060042aaa1ab186dc3d1a04ba502f52cee258c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bff736057b69a0b4d91d92941060042aaa1ab186dc3d1a04ba502f52cee258c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:19 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'alerts'
Nov 26 12:37:19 compute-0 podman[75907]: 2025-11-26 12:37:19.365967454 +0000 UTC m=+0.088087939 container init 8b91ed3daf8912828f36811b9d69f310b6917e42b87ffe8797d6ee356f0f65fb (image=quay.io/ceph/ceph:v18, name=eloquent_curie, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 26 12:37:19 compute-0 podman[75907]: 2025-11-26 12:37:19.370991101 +0000 UTC m=+0.093111586 container start 8b91ed3daf8912828f36811b9d69f310b6917e42b87ffe8797d6ee356f0f65fb (image=quay.io/ceph/ceph:v18, name=eloquent_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 12:37:19 compute-0 podman[75907]: 2025-11-26 12:37:19.373778934 +0000 UTC m=+0.095899419 container attach 8b91ed3daf8912828f36811b9d69f310b6917e42b87ffe8797d6ee356f0f65fb (image=quay.io/ceph/ceph:v18, name=eloquent_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 12:37:19 compute-0 podman[75907]: 2025-11-26 12:37:19.293557118 +0000 UTC m=+0.015677622 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:19 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:19.629+0000 7f3615d7b140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 26 12:37:19 compute-0 ceph-mgr[75236]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 26 12:37:19 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'balancer'
Nov 26 12:37:19 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 26 12:37:19 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/634867847' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 26 12:37:19 compute-0 eloquent_curie[75937]: {
Nov 26 12:37:19 compute-0 eloquent_curie[75937]:     "epoch": 5,
Nov 26 12:37:19 compute-0 eloquent_curie[75937]:     "available": true,
Nov 26 12:37:19 compute-0 eloquent_curie[75937]:     "active_name": "compute-0.whkbdn",
Nov 26 12:37:19 compute-0 eloquent_curie[75937]:     "num_standby": 0
Nov 26 12:37:19 compute-0 eloquent_curie[75937]: }
Nov 26 12:37:19 compute-0 systemd[1]: libpod-8b91ed3daf8912828f36811b9d69f310b6917e42b87ffe8797d6ee356f0f65fb.scope: Deactivated successfully.
Nov 26 12:37:19 compute-0 conmon[75937]: conmon 8b91ed3daf8912828f36 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8b91ed3daf8912828f36811b9d69f310b6917e42b87ffe8797d6ee356f0f65fb.scope/container/memory.events
Nov 26 12:37:19 compute-0 podman[75907]: 2025-11-26 12:37:19.842017562 +0000 UTC m=+0.564138048 container died 8b91ed3daf8912828f36811b9d69f310b6917e42b87ffe8797d6ee356f0f65fb (image=quay.io/ceph/ceph:v18, name=eloquent_curie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 12:37:19 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:19.854+0000 7f3615d7b140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 26 12:37:19 compute-0 ceph-mgr[75236]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 26 12:37:19 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'cephadm'
Nov 26 12:37:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bff736057b69a0b4d91d92941060042aaa1ab186dc3d1a04ba502f52cee258c-merged.mount: Deactivated successfully.
Nov 26 12:37:19 compute-0 podman[75907]: 2025-11-26 12:37:19.870506405 +0000 UTC m=+0.592626891 container remove 8b91ed3daf8912828f36811b9d69f310b6917e42b87ffe8797d6ee356f0f65fb (image=quay.io/ceph/ceph:v18, name=eloquent_curie, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 12:37:19 compute-0 systemd[1]: libpod-conmon-8b91ed3daf8912828f36811b9d69f310b6917e42b87ffe8797d6ee356f0f65fb.scope: Deactivated successfully.
Nov 26 12:37:19 compute-0 podman[75972]: 2025-11-26 12:37:19.910179584 +0000 UTC m=+0.027050855 container create 56a720ad4777b8bc048165d6b6bdde6449dd86181c96c76df3543b6c3d0c5d0f (image=quay.io/ceph/ceph:v18, name=amazing_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 12:37:19 compute-0 systemd[1]: Started libpod-conmon-56a720ad4777b8bc048165d6b6bdde6449dd86181c96c76df3543b6c3d0c5d0f.scope.
Nov 26 12:37:19 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88115fa611202d3d6546a0d20f181205f4afff689ccea602116b438f1ebe3644/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88115fa611202d3d6546a0d20f181205f4afff689ccea602116b438f1ebe3644/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88115fa611202d3d6546a0d20f181205f4afff689ccea602116b438f1ebe3644/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:19 compute-0 podman[75972]: 2025-11-26 12:37:19.95549284 +0000 UTC m=+0.072364111 container init 56a720ad4777b8bc048165d6b6bdde6449dd86181c96c76df3543b6c3d0c5d0f (image=quay.io/ceph/ceph:v18, name=amazing_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:37:19 compute-0 podman[75972]: 2025-11-26 12:37:19.959829392 +0000 UTC m=+0.076700652 container start 56a720ad4777b8bc048165d6b6bdde6449dd86181c96c76df3543b6c3d0c5d0f (image=quay.io/ceph/ceph:v18, name=amazing_kare, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:37:19 compute-0 podman[75972]: 2025-11-26 12:37:19.960928032 +0000 UTC m=+0.077799293 container attach 56a720ad4777b8bc048165d6b6bdde6449dd86181c96c76df3543b6c3d0c5d0f (image=quay.io/ceph/ceph:v18, name=amazing_kare, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:37:19 compute-0 podman[75972]: 2025-11-26 12:37:19.900297161 +0000 UTC m=+0.017168432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:20 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/849019992' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 26 12:37:20 compute-0 ceph-mon[74966]: mgrmap e5: compute-0.whkbdn(active, since 5s)
Nov 26 12:37:20 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/634867847' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 26 12:37:21 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'crash'
Nov 26 12:37:21 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:21.717+0000 7f3615d7b140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 26 12:37:21 compute-0 ceph-mgr[75236]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 26 12:37:21 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'dashboard'
Nov 26 12:37:22 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'devicehealth'
Nov 26 12:37:23 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:23.133+0000 7f3615d7b140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 26 12:37:23 compute-0 ceph-mgr[75236]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 26 12:37:23 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'diskprediction_local'
Nov 26 12:37:23 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 26 12:37:23 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 26 12:37:23 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]:   from numpy import show_config as show_numpy_config
Nov 26 12:37:23 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:23.586+0000 7f3615d7b140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 26 12:37:23 compute-0 ceph-mgr[75236]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 26 12:37:23 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'influx'
Nov 26 12:37:23 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:23.793+0000 7f3615d7b140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 26 12:37:23 compute-0 ceph-mgr[75236]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 26 12:37:23 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'insights'
Nov 26 12:37:24 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'iostat'
Nov 26 12:37:24 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:24.203+0000 7f3615d7b140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 26 12:37:24 compute-0 ceph-mgr[75236]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 26 12:37:24 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'k8sevents'
Nov 26 12:37:25 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'localpool'
Nov 26 12:37:25 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'mds_autoscaler'
Nov 26 12:37:26 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'mirroring'
Nov 26 12:37:26 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'nfs'
Nov 26 12:37:27 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:27.271+0000 7f3615d7b140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 26 12:37:27 compute-0 ceph-mgr[75236]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 26 12:37:27 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'orchestrator'
Nov 26 12:37:27 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:27.844+0000 7f3615d7b140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 26 12:37:27 compute-0 ceph-mgr[75236]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 26 12:37:27 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'osd_perf_query'
Nov 26 12:37:28 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:28.074+0000 7f3615d7b140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 26 12:37:28 compute-0 ceph-mgr[75236]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 26 12:37:28 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'osd_support'
Nov 26 12:37:28 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:28.277+0000 7f3615d7b140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 26 12:37:28 compute-0 ceph-mgr[75236]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 26 12:37:28 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'pg_autoscaler'
Nov 26 12:37:28 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:28.511+0000 7f3615d7b140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 26 12:37:28 compute-0 ceph-mgr[75236]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 26 12:37:28 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'progress'
Nov 26 12:37:28 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:28.721+0000 7f3615d7b140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 26 12:37:28 compute-0 ceph-mgr[75236]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 26 12:37:28 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'prometheus'
Nov 26 12:37:29 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:29.592+0000 7f3615d7b140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 26 12:37:29 compute-0 ceph-mgr[75236]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 26 12:37:29 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'rbd_support'
Nov 26 12:37:29 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:29.852+0000 7f3615d7b140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 26 12:37:29 compute-0 ceph-mgr[75236]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 26 12:37:29 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'restful'
Nov 26 12:37:30 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'rgw'
Nov 26 12:37:31 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:31.072+0000 7f3615d7b140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 26 12:37:31 compute-0 ceph-mgr[75236]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 26 12:37:31 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'rook'
Nov 26 12:37:32 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:32.844+0000 7f3615d7b140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 26 12:37:32 compute-0 ceph-mgr[75236]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 26 12:37:32 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'selftest'
Nov 26 12:37:33 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:33.055+0000 7f3615d7b140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 26 12:37:33 compute-0 ceph-mgr[75236]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 26 12:37:33 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'snap_schedule'
Nov 26 12:37:33 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:33.271+0000 7f3615d7b140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 26 12:37:33 compute-0 ceph-mgr[75236]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 26 12:37:33 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'stats'
Nov 26 12:37:33 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'status'
Nov 26 12:37:33 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:33.712+0000 7f3615d7b140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 26 12:37:33 compute-0 ceph-mgr[75236]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 26 12:37:33 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'telegraf'
Nov 26 12:37:33 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:33.917+0000 7f3615d7b140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 26 12:37:33 compute-0 ceph-mgr[75236]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 26 12:37:33 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'telemetry'
Nov 26 12:37:34 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:34.429+0000 7f3615d7b140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 26 12:37:34 compute-0 ceph-mgr[75236]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 26 12:37:34 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'test_orchestrator'
Nov 26 12:37:35 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:35.002+0000 7f3615d7b140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'volumes'
Nov 26 12:37:35 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:35.618+0000 7f3615d7b140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: mgr[py] Loading python module 'zabbix'
Nov 26 12:37:35 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:37:35.826+0000 7f3615d7b140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
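The repeated "Module <name> has missing NOTIFY_TYPES member" lines come from the mgr's Python module loader as it re-imports every module after the respawn triggered at 12:37:19; they are warnings only, and the same modules are constructed successfully below ("mgr load Constructed class from module: ..."). A small sketch, assuming the journal text is piped to it on stdin, that extracts the affected module names; the regular expression mirrors only the message format visible in this log.

import re
import sys

# Matches the mgr warning format seen above, e.g.
#   "mgr[py] Module balancer has missing NOTIFY_TYPES member"
pattern = re.compile(r"mgr\[py\] Module (\S+) has missing NOTIFY_TYPES member")

modules = []
for line in sys.stdin:
    m = pattern.search(line)
    if m and m.group(1) not in modules:
        modules.append(m.group(1))

print("modules lacking NOTIFY_TYPES:", ", ".join(modules))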
Nov 26 12:37:35 compute-0 ceph-mon[74966]: log_channel(cluster) log [INF] : Active manager daemon compute-0.whkbdn restarted
Nov 26 12:37:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Nov 26 12:37:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 12:37:35 compute-0 ceph-mon[74966]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.whkbdn
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: ms_deliver_dispatch: unhandled message 0x5583b00c11e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 26 12:37:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 26 12:37:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 26 12:37:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: mgr handle_mgr_map Activating!
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: mgr handle_mgr_map I am now activating
Nov 26 12:37:35 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Nov 26 12:37:35 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.whkbdn(active, starting, since 0.00689892s)
Nov 26 12:37:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 26 12:37:35 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 26 12:37:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.whkbdn", "id": "compute-0.whkbdn"} v 0) v1
Nov 26 12:37:35 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mgr metadata", "who": "compute-0.whkbdn", "id": "compute-0.whkbdn"}]: dispatch
Nov 26 12:37:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 26 12:37:35 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 26 12:37:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).mds e1 all = 1
Nov 26 12:37:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 26 12:37:35 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 26 12:37:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 26 12:37:35 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: mgr load Constructed class from module: balancer
Nov 26 12:37:35 compute-0 ceph-mon[74966]: log_channel(cluster) log [INF] : Manager daemon compute-0.whkbdn is now available
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Starting
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:37:35
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [balancer INFO root] No pools available
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Nov 26 12:37:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Nov 26 12:37:35 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Nov 26 12:37:35 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: mgr load Constructed class from module: cephadm
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: mgr load Constructed class from module: crash
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: mgr load Constructed class from module: devicehealth
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: mgr load Constructed class from module: iostat
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [devicehealth INFO root] Starting
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: mgr load Constructed class from module: nfs
Nov 26 12:37:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 26 12:37:35 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: mgr load Constructed class from module: orchestrator
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: mgr load Constructed class from module: pg_autoscaler
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: mgr load Constructed class from module: progress
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 12:37:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 26 12:37:35 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [progress INFO root] Loading...
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [progress INFO root] No stored events to load
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [progress INFO root] Loaded [] historic events
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [progress INFO root] Loaded OSDMap, ready.
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] recovery thread starting
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] starting setup
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: mgr load Constructed class from module: rbd_support
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: mgr load Constructed class from module: restful
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: mgr load Constructed class from module: status
Nov 26 12:37:35 compute-0 ceph-mon[74966]: Active manager daemon compute-0.whkbdn restarted
Nov 26 12:37:35 compute-0 ceph-mon[74966]: Activating manager daemon compute-0.whkbdn
Nov 26 12:37:35 compute-0 ceph-mon[74966]: osdmap e2: 0 total, 0 up, 0 in
Nov 26 12:37:35 compute-0 ceph-mon[74966]: mgrmap e6: compute-0.whkbdn(active, starting, since 0.00689892s)
Nov 26 12:37:35 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 26 12:37:35 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mgr metadata", "who": "compute-0.whkbdn", "id": "compute-0.whkbdn"}]: dispatch
Nov 26 12:37:35 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 26 12:37:35 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 26 12:37:35 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 26 12:37:35 compute-0 ceph-mon[74966]: Manager daemon compute-0.whkbdn is now available
Nov 26 12:37:35 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:35 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:35 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 12:37:35 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: mgr load Constructed class from module: telemetry
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [restful INFO root] server_addr: :: server_port: 8003
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [restful WARNING root] server not running: no certificate configured
Nov 26 12:37:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.whkbdn/mirror_snapshot_schedule"} v 0) v1
Nov 26 12:37:35 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.whkbdn/mirror_snapshot_schedule"}]: dispatch
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] PerfHandler: starting
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] TaskHandler: starting
Nov 26 12:37:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.whkbdn/trash_purge_schedule"} v 0) v1
Nov 26 12:37:35 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.whkbdn/trash_purge_schedule"}]: dispatch
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] setup complete
Nov 26 12:37:35 compute-0 ceph-mgr[75236]: mgr load Constructed class from module: volumes
Nov 26 12:37:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019936638 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:37:36 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Nov 26 12:37:36 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:36 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Nov 26 12:37:36 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:36 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.whkbdn(active, since 1.00943s)
Nov 26 12:37:36 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 26 12:37:36 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 26 12:37:36 compute-0 amazing_kare[75986]: {
Nov 26 12:37:36 compute-0 amazing_kare[75986]:     "mgrmap_epoch": 7,
Nov 26 12:37:36 compute-0 amazing_kare[75986]:     "initialized": true
Nov 26 12:37:36 compute-0 amazing_kare[75986]: }
Nov 26 12:37:36 compute-0 podman[75972]: 2025-11-26 12:37:36.856098349 +0000 UTC m=+16.972969611 container died 56a720ad4777b8bc048165d6b6bdde6449dd86181c96c76df3543b6c3d0c5d0f (image=quay.io/ceph/ceph:v18, name=amazing_kare, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:37:36 compute-0 systemd[1]: libpod-56a720ad4777b8bc048165d6b6bdde6449dd86181c96c76df3543b6c3d0c5d0f.scope: Deactivated successfully.
Nov 26 12:37:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-88115fa611202d3d6546a0d20f181205f4afff689ccea602116b438f1ebe3644-merged.mount: Deactivated successfully.
Nov 26 12:37:36 compute-0 podman[75972]: 2025-11-26 12:37:36.880599808 +0000 UTC m=+16.997471070 container remove 56a720ad4777b8bc048165d6b6bdde6449dd86181c96c76df3543b6c3d0c5d0f (image=quay.io/ceph/ceph:v18, name=amazing_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:37:36 compute-0 ceph-mon[74966]: Found migration_current of "None". Setting to last migration.
Nov 26 12:37:36 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.whkbdn/mirror_snapshot_schedule"}]: dispatch
Nov 26 12:37:36 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.whkbdn/trash_purge_schedule"}]: dispatch
Nov 26 12:37:36 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:36 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:36 compute-0 ceph-mon[74966]: mgrmap e7: compute-0.whkbdn(active, since 1.00943s)
Nov 26 12:37:36 compute-0 systemd[1]: libpod-conmon-56a720ad4777b8bc048165d6b6bdde6449dd86181c96c76df3543b6c3d0c5d0f.scope: Deactivated successfully.
Nov 26 12:37:36 compute-0 podman[76141]: 2025-11-26 12:37:36.922630089 +0000 UTC m=+0.026777158 container create a8188537d3cbfa3f4460f0466a96bd01ef105d9b764f60993c03062ecadbae65 (image=quay.io/ceph/ceph:v18, name=peaceful_haslett, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 12:37:36 compute-0 systemd[1]: Started libpod-conmon-a8188537d3cbfa3f4460f0466a96bd01ef105d9b764f60993c03062ecadbae65.scope.
Nov 26 12:37:36 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7614b72c1034d523babd8a557c6ed27644b283095f08209071a29e60fa329238/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7614b72c1034d523babd8a557c6ed27644b283095f08209071a29e60fa329238/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7614b72c1034d523babd8a557c6ed27644b283095f08209071a29e60fa329238/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:36 compute-0 podman[76141]: 2025-11-26 12:37:36.979537133 +0000 UTC m=+0.083684201 container init a8188537d3cbfa3f4460f0466a96bd01ef105d9b764f60993c03062ecadbae65 (image=quay.io/ceph/ceph:v18, name=peaceful_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 12:37:36 compute-0 podman[76141]: 2025-11-26 12:37:36.983826083 +0000 UTC m=+0.087973153 container start a8188537d3cbfa3f4460f0466a96bd01ef105d9b764f60993c03062ecadbae65 (image=quay.io/ceph/ceph:v18, name=peaceful_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 12:37:36 compute-0 podman[76141]: 2025-11-26 12:37:36.985044901 +0000 UTC m=+0.089191970 container attach a8188537d3cbfa3f4460f0466a96bd01ef105d9b764f60993c03062ecadbae65 (image=quay.io/ceph/ceph:v18, name=peaceful_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:37:37 compute-0 podman[76141]: 2025-11-26 12:37:36.912097061 +0000 UTC m=+0.016244130 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:37 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:37:37 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Nov 26 12:37:37 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:37 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 26 12:37:37 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 12:37:37 compute-0 systemd[1]: libpod-a8188537d3cbfa3f4460f0466a96bd01ef105d9b764f60993c03062ecadbae65.scope: Deactivated successfully.
Nov 26 12:37:37 compute-0 podman[76183]: 2025-11-26 12:37:37.469960193 +0000 UTC m=+0.018724554 container died a8188537d3cbfa3f4460f0466a96bd01ef105d9b764f60993c03062ecadbae65 (image=quay.io/ceph/ceph:v18, name=peaceful_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 12:37:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-7614b72c1034d523babd8a557c6ed27644b283095f08209071a29e60fa329238-merged.mount: Deactivated successfully.
Nov 26 12:37:37 compute-0 podman[76183]: 2025-11-26 12:37:37.48938193 +0000 UTC m=+0.038146272 container remove a8188537d3cbfa3f4460f0466a96bd01ef105d9b764f60993c03062ecadbae65 (image=quay.io/ceph/ceph:v18, name=peaceful_haslett, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 12:37:37 compute-0 systemd[1]: libpod-conmon-a8188537d3cbfa3f4460f0466a96bd01ef105d9b764f60993c03062ecadbae65.scope: Deactivated successfully.
Nov 26 12:37:37 compute-0 podman[76194]: 2025-11-26 12:37:37.529250678 +0000 UTC m=+0.024299549 container create d5ef34add8e5251bd59c065b3fa2e46a4f41f10627102ea13477648eb72bb308 (image=quay.io/ceph/ceph:v18, name=determined_galois, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:37:37 compute-0 systemd[1]: Started libpod-conmon-d5ef34add8e5251bd59c065b3fa2e46a4f41f10627102ea13477648eb72bb308.scope.
Nov 26 12:37:37 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bb45ca3bfdd67d3619f2b4d0d88a631fecc3bbd0bbe6375035364a2da445fee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bb45ca3bfdd67d3619f2b4d0d88a631fecc3bbd0bbe6375035364a2da445fee/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bb45ca3bfdd67d3619f2b4d0d88a631fecc3bbd0bbe6375035364a2da445fee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:37 compute-0 podman[76194]: 2025-11-26 12:37:37.588825658 +0000 UTC m=+0.083874549 container init d5ef34add8e5251bd59c065b3fa2e46a4f41f10627102ea13477648eb72bb308 (image=quay.io/ceph/ceph:v18, name=determined_galois, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 12:37:37 compute-0 podman[76194]: 2025-11-26 12:37:37.592395795 +0000 UTC m=+0.087444667 container start d5ef34add8e5251bd59c065b3fa2e46a4f41f10627102ea13477648eb72bb308 (image=quay.io/ceph/ceph:v18, name=determined_galois, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 12:37:37 compute-0 podman[76194]: 2025-11-26 12:37:37.596077354 +0000 UTC m=+0.091126224 container attach d5ef34add8e5251bd59c065b3fa2e46a4f41f10627102ea13477648eb72bb308 (image=quay.io/ceph/ceph:v18, name=determined_galois, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 12:37:37 compute-0 podman[76194]: 2025-11-26 12:37:37.519381711 +0000 UTC m=+0.014430612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:37 compute-0 ceph-mgr[75236]: [cephadm INFO cherrypy.error] [26/Nov/2025:12:37:37] ENGINE Bus STARTING
Nov 26 12:37:37 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : [26/Nov/2025:12:37:37] ENGINE Bus STARTING
Nov 26 12:37:37 compute-0 ceph-mgr[75236]: [cephadm INFO cherrypy.error] [26/Nov/2025:12:37:37] ENGINE Serving on https://192.168.122.100:7150
Nov 26 12:37:37 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : [26/Nov/2025:12:37:37] ENGINE Serving on https://192.168.122.100:7150
Nov 26 12:37:37 compute-0 ceph-mgr[75236]: [cephadm INFO cherrypy.error] [26/Nov/2025:12:37:37] ENGINE Client ('192.168.122.100', 59656) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 26 12:37:37 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : [26/Nov/2025:12:37:37] ENGINE Client ('192.168.122.100', 59656) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 26 12:37:37 compute-0 ceph-mgr[75236]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 12:37:37 compute-0 ceph-mgr[75236]: [cephadm INFO cherrypy.error] [26/Nov/2025:12:37:37] ENGINE Serving on http://192.168.122.100:8765
Nov 26 12:37:37 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : [26/Nov/2025:12:37:37] ENGINE Serving on http://192.168.122.100:8765
Nov 26 12:37:37 compute-0 ceph-mgr[75236]: [cephadm INFO cherrypy.error] [26/Nov/2025:12:37:37] ENGINE Bus STARTED
Nov 26 12:37:37 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : [26/Nov/2025:12:37:37] ENGINE Bus STARTED
Nov 26 12:37:37 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 26 12:37:37 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 12:37:37 compute-0 ceph-mon[74966]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 26 12:37:37 compute-0 ceph-mon[74966]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 26 12:37:37 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:37 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 12:37:37 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 12:37:38 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:37:38 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Nov 26 12:37:38 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:38 compute-0 ceph-mgr[75236]: [cephadm INFO root] Set ssh ssh_user
Nov 26 12:37:38 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Nov 26 12:37:38 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Nov 26 12:37:38 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:38 compute-0 ceph-mgr[75236]: [cephadm INFO root] Set ssh ssh_config
Nov 26 12:37:38 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Nov 26 12:37:38 compute-0 ceph-mgr[75236]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Nov 26 12:37:38 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Nov 26 12:37:38 compute-0 determined_galois[76207]: ssh user set to ceph-admin. sudo will be used
Nov 26 12:37:38 compute-0 systemd[1]: libpod-d5ef34add8e5251bd59c065b3fa2e46a4f41f10627102ea13477648eb72bb308.scope: Deactivated successfully.
Nov 26 12:37:38 compute-0 conmon[76207]: conmon d5ef34add8e5251bd59c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d5ef34add8e5251bd59c065b3fa2e46a4f41f10627102ea13477648eb72bb308.scope/container/memory.events
Nov 26 12:37:38 compute-0 podman[76256]: 2025-11-26 12:37:38.057292728 +0000 UTC m=+0.015778725 container died d5ef34add8e5251bd59c065b3fa2e46a4f41f10627102ea13477648eb72bb308 (image=quay.io/ceph/ceph:v18, name=determined_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 12:37:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bb45ca3bfdd67d3619f2b4d0d88a631fecc3bbd0bbe6375035364a2da445fee-merged.mount: Deactivated successfully.
Nov 26 12:37:38 compute-0 podman[76256]: 2025-11-26 12:37:38.076466727 +0000 UTC m=+0.034952714 container remove d5ef34add8e5251bd59c065b3fa2e46a4f41f10627102ea13477648eb72bb308 (image=quay.io/ceph/ceph:v18, name=determined_galois, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 26 12:37:38 compute-0 systemd[1]: libpod-conmon-d5ef34add8e5251bd59c065b3fa2e46a4f41f10627102ea13477648eb72bb308.scope: Deactivated successfully.
Nov 26 12:37:38 compute-0 podman[76268]: 2025-11-26 12:37:38.119907104 +0000 UTC m=+0.027294554 container create 570e1654906cb8d6d8ed59b2c3be09213e88d85273ead3c82108e847c09c7d95 (image=quay.io/ceph/ceph:v18, name=frosty_khayyam, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Nov 26 12:37:38 compute-0 systemd[1]: Started libpod-conmon-570e1654906cb8d6d8ed59b2c3be09213e88d85273ead3c82108e847c09c7d95.scope.
Nov 26 12:37:38 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6322b1f44afc6bf0e0e7fed1595d763c1ac5284546fc66e9b5fa9a3f25df98d7/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6322b1f44afc6bf0e0e7fed1595d763c1ac5284546fc66e9b5fa9a3f25df98d7/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6322b1f44afc6bf0e0e7fed1595d763c1ac5284546fc66e9b5fa9a3f25df98d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6322b1f44afc6bf0e0e7fed1595d763c1ac5284546fc66e9b5fa9a3f25df98d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6322b1f44afc6bf0e0e7fed1595d763c1ac5284546fc66e9b5fa9a3f25df98d7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:38 compute-0 podman[76268]: 2025-11-26 12:37:38.170825713 +0000 UTC m=+0.078213163 container init 570e1654906cb8d6d8ed59b2c3be09213e88d85273ead3c82108e847c09c7d95 (image=quay.io/ceph/ceph:v18, name=frosty_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 12:37:38 compute-0 podman[76268]: 2025-11-26 12:37:38.175219052 +0000 UTC m=+0.082606501 container start 570e1654906cb8d6d8ed59b2c3be09213e88d85273ead3c82108e847c09c7d95 (image=quay.io/ceph/ceph:v18, name=frosty_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 12:37:38 compute-0 podman[76268]: 2025-11-26 12:37:38.176322681 +0000 UTC m=+0.083710131 container attach 570e1654906cb8d6d8ed59b2c3be09213e88d85273ead3c82108e847c09c7d95 (image=quay.io/ceph/ceph:v18, name=frosty_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:37:38 compute-0 podman[76268]: 2025-11-26 12:37:38.108024423 +0000 UTC m=+0.015411873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:38 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.whkbdn(active, since 2s)
Nov 26 12:37:38 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:37:38 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Nov 26 12:37:38 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:38 compute-0 ceph-mgr[75236]: [cephadm INFO root] Set ssh ssh_identity_key
Nov 26 12:37:38 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Nov 26 12:37:38 compute-0 ceph-mgr[75236]: [cephadm INFO root] Set ssh private key
Nov 26 12:37:38 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Set ssh private key
Nov 26 12:37:38 compute-0 systemd[1]: libpod-570e1654906cb8d6d8ed59b2c3be09213e88d85273ead3c82108e847c09c7d95.scope: Deactivated successfully.
Nov 26 12:37:38 compute-0 conmon[76281]: conmon 570e1654906cb8d6d8ed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-570e1654906cb8d6d8ed59b2c3be09213e88d85273ead3c82108e847c09c7d95.scope/container/memory.events
Nov 26 12:37:38 compute-0 podman[76268]: 2025-11-26 12:37:38.613232806 +0000 UTC m=+0.520620255 container died 570e1654906cb8d6d8ed59b2c3be09213e88d85273ead3c82108e847c09c7d95 (image=quay.io/ceph/ceph:v18, name=frosty_khayyam, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 12:37:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-6322b1f44afc6bf0e0e7fed1595d763c1ac5284546fc66e9b5fa9a3f25df98d7-merged.mount: Deactivated successfully.
Nov 26 12:37:38 compute-0 podman[76268]: 2025-11-26 12:37:38.634091971 +0000 UTC m=+0.541479421 container remove 570e1654906cb8d6d8ed59b2c3be09213e88d85273ead3c82108e847c09c7d95 (image=quay.io/ceph/ceph:v18, name=frosty_khayyam, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:37:38 compute-0 systemd[1]: libpod-conmon-570e1654906cb8d6d8ed59b2c3be09213e88d85273ead3c82108e847c09c7d95.scope: Deactivated successfully.
Nov 26 12:37:38 compute-0 podman[76314]: 2025-11-26 12:37:38.672991903 +0000 UTC m=+0.026662572 container create 4678fb52268e854774346d060177f2442d72d3b821982d3b78d53c97ba046b8f (image=quay.io/ceph/ceph:v18, name=determined_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 12:37:38 compute-0 systemd[1]: Started libpod-conmon-4678fb52268e854774346d060177f2442d72d3b821982d3b78d53c97ba046b8f.scope.
Nov 26 12:37:38 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef0686c22330c4b484cb4ad8b55ab60ede749ae1b6b3f4859484f9750a07a866/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef0686c22330c4b484cb4ad8b55ab60ede749ae1b6b3f4859484f9750a07a866/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef0686c22330c4b484cb4ad8b55ab60ede749ae1b6b3f4859484f9750a07a866/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef0686c22330c4b484cb4ad8b55ab60ede749ae1b6b3f4859484f9750a07a866/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef0686c22330c4b484cb4ad8b55ab60ede749ae1b6b3f4859484f9750a07a866/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:38 compute-0 podman[76314]: 2025-11-26 12:37:38.733609889 +0000 UTC m=+0.087280568 container init 4678fb52268e854774346d060177f2442d72d3b821982d3b78d53c97ba046b8f (image=quay.io/ceph/ceph:v18, name=determined_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 12:37:38 compute-0 podman[76314]: 2025-11-26 12:37:38.739202668 +0000 UTC m=+0.092873337 container start 4678fb52268e854774346d060177f2442d72d3b821982d3b78d53c97ba046b8f (image=quay.io/ceph/ceph:v18, name=determined_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:37:38 compute-0 podman[76314]: 2025-11-26 12:37:38.740375067 +0000 UTC m=+0.094045736 container attach 4678fb52268e854774346d060177f2442d72d3b821982d3b78d53c97ba046b8f (image=quay.io/ceph/ceph:v18, name=determined_ritchie, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:37:38 compute-0 podman[76314]: 2025-11-26 12:37:38.662044554 +0000 UTC m=+0.015715243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:39 compute-0 ceph-mon[74966]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:37:39 compute-0 ceph-mon[74966]: [26/Nov/2025:12:37:37] ENGINE Bus STARTING
Nov 26 12:37:39 compute-0 ceph-mon[74966]: [26/Nov/2025:12:37:37] ENGINE Serving on https://192.168.122.100:7150
Nov 26 12:37:39 compute-0 ceph-mon[74966]: [26/Nov/2025:12:37:37] ENGINE Client ('192.168.122.100', 59656) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 26 12:37:39 compute-0 ceph-mon[74966]: [26/Nov/2025:12:37:37] ENGINE Serving on http://192.168.122.100:8765
Nov 26 12:37:39 compute-0 ceph-mon[74966]: [26/Nov/2025:12:37:37] ENGINE Bus STARTED
Nov 26 12:37:39 compute-0 ceph-mon[74966]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:37:39 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:39 compute-0 ceph-mon[74966]: Set ssh ssh_user
Nov 26 12:37:39 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:39 compute-0 ceph-mon[74966]: Set ssh ssh_config
Nov 26 12:37:39 compute-0 ceph-mon[74966]: ssh user set to ceph-admin. sudo will be used
Nov 26 12:37:39 compute-0 ceph-mon[74966]: mgrmap e8: compute-0.whkbdn(active, since 2s)
Nov 26 12:37:39 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:39 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:37:39 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Nov 26 12:37:39 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:39 compute-0 ceph-mgr[75236]: [cephadm INFO root] Set ssh ssh_identity_pub
Nov 26 12:37:39 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Nov 26 12:37:39 compute-0 systemd[1]: libpod-4678fb52268e854774346d060177f2442d72d3b821982d3b78d53c97ba046b8f.scope: Deactivated successfully.
Nov 26 12:37:39 compute-0 podman[76355]: 2025-11-26 12:37:39.202316359 +0000 UTC m=+0.015765509 container died 4678fb52268e854774346d060177f2442d72d3b821982d3b78d53c97ba046b8f (image=quay.io/ceph/ceph:v18, name=determined_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:37:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef0686c22330c4b484cb4ad8b55ab60ede749ae1b6b3f4859484f9750a07a866-merged.mount: Deactivated successfully.
Nov 26 12:37:39 compute-0 podman[76355]: 2025-11-26 12:37:39.223294148 +0000 UTC m=+0.036743300 container remove 4678fb52268e854774346d060177f2442d72d3b821982d3b78d53c97ba046b8f (image=quay.io/ceph/ceph:v18, name=determined_ritchie, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:37:39 compute-0 systemd[1]: libpod-conmon-4678fb52268e854774346d060177f2442d72d3b821982d3b78d53c97ba046b8f.scope: Deactivated successfully.
Nov 26 12:37:39 compute-0 podman[76366]: 2025-11-26 12:37:39.264578712 +0000 UTC m=+0.024920650 container create 7279d42ab0c0563bb9dea345458998e3f714c22efa54fd00a4859c877921fc43 (image=quay.io/ceph/ceph:v18, name=beautiful_sanderson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:37:39 compute-0 systemd[1]: Started libpod-conmon-7279d42ab0c0563bb9dea345458998e3f714c22efa54fd00a4859c877921fc43.scope.
Nov 26 12:37:39 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad760954371d96a888c9e50d061e4f2a1d169cf55271638034217dfa6b2439c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad760954371d96a888c9e50d061e4f2a1d169cf55271638034217dfa6b2439c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad760954371d96a888c9e50d061e4f2a1d169cf55271638034217dfa6b2439c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:39 compute-0 podman[76366]: 2025-11-26 12:37:39.317521395 +0000 UTC m=+0.077863323 container init 7279d42ab0c0563bb9dea345458998e3f714c22efa54fd00a4859c877921fc43 (image=quay.io/ceph/ceph:v18, name=beautiful_sanderson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:37:39 compute-0 podman[76366]: 2025-11-26 12:37:39.32185931 +0000 UTC m=+0.082201238 container start 7279d42ab0c0563bb9dea345458998e3f714c22efa54fd00a4859c877921fc43 (image=quay.io/ceph/ceph:v18, name=beautiful_sanderson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:37:39 compute-0 podman[76366]: 2025-11-26 12:37:39.322949153 +0000 UTC m=+0.083291091 container attach 7279d42ab0c0563bb9dea345458998e3f714c22efa54fd00a4859c877921fc43 (image=quay.io/ceph/ceph:v18, name=beautiful_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 12:37:39 compute-0 podman[76366]: 2025-11-26 12:37:39.254530488 +0000 UTC m=+0.014872426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:39 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:37:39 compute-0 beautiful_sanderson[76379]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvzqcp66TiEKSJJXsCaM6pZVOjHowqaUwAacUNTuScNATnMclQNqJrFrKVDP5+ItZDGhKMg3QDhg15mF5ocMZkHASuETlMwuh9zgM5uBkTrVc6LQV4JpGbxJinCHqGe8PCuCaMFuwuJRLOsLe7inLXSzwXspd3jy9Udf9SYAtv83h9Rv4wzeNYpq7na5kMENHl4CegUrA4RCybyBeFdjje+D+XMFDI3INOocL3r6CpO3AWqzcq8jYiHNSSQ1KsCYNzA+9gjEpIZjPIYJ+h7yttsGh19F+AbPo9b9kckfAb2xJetlN5Kpgqdj047LKyY/fJNDKzP8/FGutWbvR3uF3/6c5UoVhhBmYzRuSX7+TFVWlwfPguFRplhlyehjUXcZEGh7Ci9SfjV+mJ4IVxh1S4wHbUGYtxYhY6bJkNZKEXs1nHHuy3z0PkcYt1FP0QIBqdGKzkXm/HUSN5E71JuSkP69bP+TA6sa0d36RvtdFh/G93Ywekdm+NQi+Wo6QR+G8= zuul@controller
Nov 26 12:37:39 compute-0 systemd[1]: libpod-7279d42ab0c0563bb9dea345458998e3f714c22efa54fd00a4859c877921fc43.scope: Deactivated successfully.
Nov 26 12:37:39 compute-0 podman[76366]: 2025-11-26 12:37:39.752060882 +0000 UTC m=+0.512402810 container died 7279d42ab0c0563bb9dea345458998e3f714c22efa54fd00a4859c877921fc43 (image=quay.io/ceph/ceph:v18, name=beautiful_sanderson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:37:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-aad760954371d96a888c9e50d061e4f2a1d169cf55271638034217dfa6b2439c-merged.mount: Deactivated successfully.
Nov 26 12:37:39 compute-0 podman[76366]: 2025-11-26 12:37:39.773925573 +0000 UTC m=+0.534267501 container remove 7279d42ab0c0563bb9dea345458998e3f714c22efa54fd00a4859c877921fc43 (image=quay.io/ceph/ceph:v18, name=beautiful_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 12:37:39 compute-0 systemd[1]: libpod-conmon-7279d42ab0c0563bb9dea345458998e3f714c22efa54fd00a4859c877921fc43.scope: Deactivated successfully.
Nov 26 12:37:39 compute-0 podman[76414]: 2025-11-26 12:37:39.812976629 +0000 UTC m=+0.025007024 container create 237ccb0a258dd72a29e08e147d0c20ccd1f47d42fea2cf90b1f9185d7e9f7295 (image=quay.io/ceph/ceph:v18, name=amazing_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:37:39 compute-0 systemd[1]: Started libpod-conmon-237ccb0a258dd72a29e08e147d0c20ccd1f47d42fea2cf90b1f9185d7e9f7295.scope.
Nov 26 12:37:39 compute-0 ceph-mgr[75236]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 12:37:39 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87b6d6a6efd90f6912e12787bed0b0cbbf220343cfa176e3b57559cf85179816/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87b6d6a6efd90f6912e12787bed0b0cbbf220343cfa176e3b57559cf85179816/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87b6d6a6efd90f6912e12787bed0b0cbbf220343cfa176e3b57559cf85179816/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:39 compute-0 podman[76414]: 2025-11-26 12:37:39.86009136 +0000 UTC m=+0.072121745 container init 237ccb0a258dd72a29e08e147d0c20ccd1f47d42fea2cf90b1f9185d7e9f7295 (image=quay.io/ceph/ceph:v18, name=amazing_bouman, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 12:37:39 compute-0 podman[76414]: 2025-11-26 12:37:39.864594646 +0000 UTC m=+0.076625041 container start 237ccb0a258dd72a29e08e147d0c20ccd1f47d42fea2cf90b1f9185d7e9f7295 (image=quay.io/ceph/ceph:v18, name=amazing_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 12:37:39 compute-0 podman[76414]: 2025-11-26 12:37:39.865592506 +0000 UTC m=+0.077622911 container attach 237ccb0a258dd72a29e08e147d0c20ccd1f47d42fea2cf90b1f9185d7e9f7295 (image=quay.io/ceph/ceph:v18, name=amazing_bouman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 12:37:39 compute-0 podman[76414]: 2025-11-26 12:37:39.802855136 +0000 UTC m=+0.014885542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:40 compute-0 ceph-mon[74966]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:37:40 compute-0 ceph-mon[74966]: Set ssh ssh_identity_key
Nov 26 12:37:40 compute-0 ceph-mon[74966]: Set ssh private key
Nov 26 12:37:40 compute-0 ceph-mon[74966]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:37:40 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:40 compute-0 ceph-mon[74966]: Set ssh ssh_identity_pub
Nov 26 12:37:40 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:37:40 compute-0 sshd-session[76453]: Accepted publickey for ceph-admin from 192.168.122.100 port 39956 ssh2: RSA SHA256:u+oi91Se3Z6qNLfJgM2if+islPXdtJdild13071S1x0
Nov 26 12:37:40 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 26 12:37:40 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 26 12:37:40 compute-0 systemd-logind[777]: New session 20 of user ceph-admin.
Nov 26 12:37:40 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 26 12:37:40 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 26 12:37:40 compute-0 systemd[76457]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 12:37:40 compute-0 systemd[76457]: Queued start job for default target Main User Target.
Nov 26 12:37:40 compute-0 systemd[76457]: Created slice User Application Slice.
Nov 26 12:37:40 compute-0 systemd[76457]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 26 12:37:40 compute-0 systemd[76457]: Started Daily Cleanup of User's Temporary Directories.
Nov 26 12:37:40 compute-0 systemd[76457]: Reached target Paths.
Nov 26 12:37:40 compute-0 systemd[76457]: Reached target Timers.
Nov 26 12:37:40 compute-0 systemd[76457]: Starting D-Bus User Message Bus Socket...
Nov 26 12:37:40 compute-0 systemd[76457]: Starting Create User's Volatile Files and Directories...
Nov 26 12:37:40 compute-0 systemd[76457]: Finished Create User's Volatile Files and Directories.
Nov 26 12:37:40 compute-0 systemd[76457]: Listening on D-Bus User Message Bus Socket.
Nov 26 12:37:40 compute-0 systemd[76457]: Reached target Sockets.
Nov 26 12:37:40 compute-0 systemd[76457]: Reached target Basic System.
Nov 26 12:37:40 compute-0 systemd[76457]: Reached target Main User Target.
Nov 26 12:37:40 compute-0 systemd[76457]: Startup finished in 93ms.
Nov 26 12:37:40 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 26 12:37:40 compute-0 systemd[1]: Started Session 20 of User ceph-admin.
Nov 26 12:37:40 compute-0 sshd-session[76453]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 12:37:40 compute-0 sshd-session[76471]: Accepted publickey for ceph-admin from 192.168.122.100 port 39966 ssh2: RSA SHA256:u+oi91Se3Z6qNLfJgM2if+islPXdtJdild13071S1x0
Nov 26 12:37:40 compute-0 systemd-logind[777]: New session 22 of user ceph-admin.
Nov 26 12:37:40 compute-0 systemd[1]: Started Session 22 of User ceph-admin.
Nov 26 12:37:40 compute-0 sshd-session[76471]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 12:37:40 compute-0 sudo[76478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:40 compute-0 sudo[76478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:40 compute-0 sudo[76478]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:40 compute-0 sudo[76503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:37:40 compute-0 sudo[76503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:40 compute-0 sudo[76503]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:40 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053230 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:37:40 compute-0 sshd-session[76528]: Accepted publickey for ceph-admin from 192.168.122.100 port 39978 ssh2: RSA SHA256:u+oi91Se3Z6qNLfJgM2if+islPXdtJdild13071S1x0
Nov 26 12:37:40 compute-0 systemd-logind[777]: New session 23 of user ceph-admin.
Nov 26 12:37:40 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Nov 26 12:37:40 compute-0 sshd-session[76528]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 12:37:41 compute-0 sudo[76532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:41 compute-0 sudo[76532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:41 compute-0 sudo[76532]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:41 compute-0 sudo[76557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 26 12:37:41 compute-0 sudo[76557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:41 compute-0 sudo[76557]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:41 compute-0 ceph-mon[74966]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:37:41 compute-0 sshd-session[76582]: Accepted publickey for ceph-admin from 192.168.122.100 port 43830 ssh2: RSA SHA256:u+oi91Se3Z6qNLfJgM2if+islPXdtJdild13071S1x0
Nov 26 12:37:41 compute-0 systemd-logind[777]: New session 24 of user ceph-admin.
Nov 26 12:37:41 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Nov 26 12:37:41 compute-0 sshd-session[76582]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 12:37:41 compute-0 sudo[76586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:41 compute-0 sudo[76586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:41 compute-0 sudo[76586]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:41 compute-0 sudo[76611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Nov 26 12:37:41 compute-0 sudo[76611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:41 compute-0 sudo[76611]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:41 compute-0 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Nov 26 12:37:41 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Nov 26 12:37:41 compute-0 sshd-session[76636]: Accepted publickey for ceph-admin from 192.168.122.100 port 43832 ssh2: RSA SHA256:u+oi91Se3Z6qNLfJgM2if+islPXdtJdild13071S1x0
Nov 26 12:37:41 compute-0 systemd-logind[777]: New session 25 of user ceph-admin.
Nov 26 12:37:41 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Nov 26 12:37:41 compute-0 sshd-session[76636]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 12:37:41 compute-0 sudo[76640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:41 compute-0 sudo[76640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:41 compute-0 sudo[76640]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:41 compute-0 sudo[76665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 12:37:41 compute-0 sudo[76665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:41 compute-0 sudo[76665]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:41 compute-0 ceph-mgr[75236]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 12:37:41 compute-0 sshd-session[76690]: Accepted publickey for ceph-admin from 192.168.122.100 port 43840 ssh2: RSA SHA256:u+oi91Se3Z6qNLfJgM2if+islPXdtJdild13071S1x0
Nov 26 12:37:41 compute-0 systemd-logind[777]: New session 26 of user ceph-admin.
Nov 26 12:37:41 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Nov 26 12:37:41 compute-0 sshd-session[76690]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 12:37:41 compute-0 sudo[76694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:41 compute-0 sudo[76694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:41 compute-0 sudo[76694]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:42 compute-0 sudo[76719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 12:37:42 compute-0 sudo[76719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:42 compute-0 sudo[76719]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:42 compute-0 ceph-mon[74966]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:37:42 compute-0 sshd-session[76744]: Accepted publickey for ceph-admin from 192.168.122.100 port 43844 ssh2: RSA SHA256:u+oi91Se3Z6qNLfJgM2if+islPXdtJdild13071S1x0
Nov 26 12:37:42 compute-0 systemd-logind[777]: New session 27 of user ceph-admin.
Nov 26 12:37:42 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Nov 26 12:37:42 compute-0 sshd-session[76744]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 12:37:42 compute-0 sudo[76748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:42 compute-0 sudo[76748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:42 compute-0 sudo[76748]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:42 compute-0 sudo[76773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Nov 26 12:37:42 compute-0 sudo[76773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:42 compute-0 sudo[76773]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:42 compute-0 sshd-session[76798]: Accepted publickey for ceph-admin from 192.168.122.100 port 43852 ssh2: RSA SHA256:u+oi91Se3Z6qNLfJgM2if+islPXdtJdild13071S1x0
Nov 26 12:37:42 compute-0 systemd-logind[777]: New session 28 of user ceph-admin.
Nov 26 12:37:42 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Nov 26 12:37:42 compute-0 sshd-session[76798]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 12:37:42 compute-0 sudo[76802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:42 compute-0 sudo[76802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:42 compute-0 sudo[76802]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:42 compute-0 sudo[76827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 12:37:42 compute-0 sudo[76827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:42 compute-0 sudo[76827]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:42 compute-0 sshd-session[76852]: Accepted publickey for ceph-admin from 192.168.122.100 port 43860 ssh2: RSA SHA256:u+oi91Se3Z6qNLfJgM2if+islPXdtJdild13071S1x0
Nov 26 12:37:42 compute-0 systemd-logind[777]: New session 29 of user ceph-admin.
Nov 26 12:37:42 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Nov 26 12:37:42 compute-0 sshd-session[76852]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 12:37:43 compute-0 sudo[76856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:43 compute-0 sudo[76856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:43 compute-0 sudo[76856]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:43 compute-0 sudo[76881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Nov 26 12:37:43 compute-0 sudo[76881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:43 compute-0 sudo[76881]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:43 compute-0 ceph-mon[74966]: Deploying cephadm binary to compute-0
Nov 26 12:37:43 compute-0 sshd-session[76906]: Accepted publickey for ceph-admin from 192.168.122.100 port 43874 ssh2: RSA SHA256:u+oi91Se3Z6qNLfJgM2if+islPXdtJdild13071S1x0
Nov 26 12:37:43 compute-0 systemd-logind[777]: New session 30 of user ceph-admin.
Nov 26 12:37:43 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Nov 26 12:37:43 compute-0 sshd-session[76906]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 12:37:43 compute-0 sshd-session[76933]: Accepted publickey for ceph-admin from 192.168.122.100 port 43880 ssh2: RSA SHA256:u+oi91Se3Z6qNLfJgM2if+islPXdtJdild13071S1x0
Nov 26 12:37:43 compute-0 systemd-logind[777]: New session 31 of user ceph-admin.
Nov 26 12:37:43 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Nov 26 12:37:43 compute-0 sshd-session[76933]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 12:37:43 compute-0 sudo[76937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:43 compute-0 sudo[76937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:43 compute-0 sudo[76937]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:43 compute-0 sudo[76962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Nov 26 12:37:43 compute-0 sudo[76962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:43 compute-0 sudo[76962]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:43 compute-0 ceph-mgr[75236]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 12:37:43 compute-0 sshd-session[76987]: Accepted publickey for ceph-admin from 192.168.122.100 port 43888 ssh2: RSA SHA256:u+oi91Se3Z6qNLfJgM2if+islPXdtJdild13071S1x0
Nov 26 12:37:44 compute-0 systemd-logind[777]: New session 32 of user ceph-admin.
Nov 26 12:37:44 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Nov 26 12:37:44 compute-0 sshd-session[76987]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 26 12:37:44 compute-0 sudo[76991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:44 compute-0 sudo[76991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:44 compute-0 sudo[76991]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:44 compute-0 sudo[77016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 26 12:37:44 compute-0 sudo[77016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:44 compute-0 sudo[77016]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:44 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 26 12:37:44 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:44 compute-0 ceph-mgr[75236]: [cephadm INFO root] Added host compute-0
Nov 26 12:37:44 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 26 12:37:44 compute-0 amazing_bouman[76427]: Added host 'compute-0' with addr '192.168.122.100'
Nov 26 12:37:44 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 26 12:37:44 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 12:37:44 compute-0 systemd[1]: libpod-237ccb0a258dd72a29e08e147d0c20ccd1f47d42fea2cf90b1f9185d7e9f7295.scope: Deactivated successfully.
Nov 26 12:37:44 compute-0 podman[76414]: 2025-11-26 12:37:44.343021603 +0000 UTC m=+4.555051988 container died 237ccb0a258dd72a29e08e147d0c20ccd1f47d42fea2cf90b1f9185d7e9f7295 (image=quay.io/ceph/ceph:v18, name=amazing_bouman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 26 12:37:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-87b6d6a6efd90f6912e12787bed0b0cbbf220343cfa176e3b57559cf85179816-merged.mount: Deactivated successfully.
Nov 26 12:37:44 compute-0 podman[76414]: 2025-11-26 12:37:44.37567303 +0000 UTC m=+4.587703415 container remove 237ccb0a258dd72a29e08e147d0c20ccd1f47d42fea2cf90b1f9185d7e9f7295 (image=quay.io/ceph/ceph:v18, name=amazing_bouman, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 12:37:44 compute-0 sudo[77059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:44 compute-0 systemd[1]: libpod-conmon-237ccb0a258dd72a29e08e147d0c20ccd1f47d42fea2cf90b1f9185d7e9f7295.scope: Deactivated successfully.
Nov 26 12:37:44 compute-0 sudo[77059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:44 compute-0 sudo[77059]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:44 compute-0 podman[77092]: 2025-11-26 12:37:44.422416362 +0000 UTC m=+0.029061072 container create c594f485238837d6178eff0f09e0b9aec83d1badd79b41bcf93efc3ba1823d3f (image=quay.io/ceph/ceph:v18, name=gifted_lamport, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 12:37:44 compute-0 sudo[77095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:37:44 compute-0 sudo[77095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:44 compute-0 sudo[77095]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:44 compute-0 systemd[1]: Started libpod-conmon-c594f485238837d6178eff0f09e0b9aec83d1badd79b41bcf93efc3ba1823d3f.scope.
Nov 26 12:37:44 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7485fe15a22fdb224e71c38104ed56d557fc469f722275480583c19b6f510a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7485fe15a22fdb224e71c38104ed56d557fc469f722275480583c19b6f510a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7485fe15a22fdb224e71c38104ed56d557fc469f722275480583c19b6f510a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:44 compute-0 podman[77092]: 2025-11-26 12:37:44.473028522 +0000 UTC m=+0.079673254 container init c594f485238837d6178eff0f09e0b9aec83d1badd79b41bcf93efc3ba1823d3f (image=quay.io/ceph/ceph:v18, name=gifted_lamport, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:37:44 compute-0 podman[77092]: 2025-11-26 12:37:44.478877845 +0000 UTC m=+0.085522556 container start c594f485238837d6178eff0f09e0b9aec83d1badd79b41bcf93efc3ba1823d3f (image=quay.io/ceph/ceph:v18, name=gifted_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:37:44 compute-0 podman[77092]: 2025-11-26 12:37:44.480753959 +0000 UTC m=+0.087398680 container attach c594f485238837d6178eff0f09e0b9aec83d1badd79b41bcf93efc3ba1823d3f (image=quay.io/ceph/ceph:v18, name=gifted_lamport, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 12:37:44 compute-0 sudo[77131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:44 compute-0 sudo[77131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:44 compute-0 sudo[77131]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:44 compute-0 podman[77092]: 2025-11-26 12:37:44.411553832 +0000 UTC m=+0.018198563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:44 compute-0 sudo[77161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph:v18 --timeout 895 inspect-image
Nov 26 12:37:44 compute-0 sudo[77161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:44 compute-0 podman[77206]: 2025-11-26 12:37:44.706726873 +0000 UTC m=+0.028399315 container create fa04181f1d93e1c0c8ee0a7b9e28db903eb2f9848b36ef722b5165079e08ddd9 (image=quay.io/ceph/ceph:v18, name=keen_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:37:44 compute-0 systemd[1]: Started libpod-conmon-fa04181f1d93e1c0c8ee0a7b9e28db903eb2f9848b36ef722b5165079e08ddd9.scope.
Nov 26 12:37:44 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:44 compute-0 podman[77206]: 2025-11-26 12:37:44.757215212 +0000 UTC m=+0.078887664 container init fa04181f1d93e1c0c8ee0a7b9e28db903eb2f9848b36ef722b5165079e08ddd9 (image=quay.io/ceph/ceph:v18, name=keen_jones, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 12:37:44 compute-0 podman[77206]: 2025-11-26 12:37:44.761530834 +0000 UTC m=+0.083203266 container start fa04181f1d93e1c0c8ee0a7b9e28db903eb2f9848b36ef722b5165079e08ddd9 (image=quay.io/ceph/ceph:v18, name=keen_jones, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 12:37:44 compute-0 podman[77206]: 2025-11-26 12:37:44.762954366 +0000 UTC m=+0.084626798 container attach fa04181f1d93e1c0c8ee0a7b9e28db903eb2f9848b36ef722b5165079e08ddd9 (image=quay.io/ceph/ceph:v18, name=keen_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 26 12:37:44 compute-0 podman[77206]: 2025-11-26 12:37:44.695029101 +0000 UTC m=+0.016701553 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:44 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:37:44 compute-0 ceph-mgr[75236]: [cephadm INFO root] Saving service mon spec with placement count:5
Nov 26 12:37:44 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Nov 26 12:37:44 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 26 12:37:44 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:44 compute-0 gifted_lamport[77134]: Scheduled mon update...
Nov 26 12:37:44 compute-0 systemd[1]: libpod-c594f485238837d6178eff0f09e0b9aec83d1badd79b41bcf93efc3ba1823d3f.scope: Deactivated successfully.
Nov 26 12:37:44 compute-0 podman[77092]: 2025-11-26 12:37:44.92931558 +0000 UTC m=+0.535960311 container died c594f485238837d6178eff0f09e0b9aec83d1badd79b41bcf93efc3ba1823d3f (image=quay.io/ceph/ceph:v18, name=gifted_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 12:37:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e7485fe15a22fdb224e71c38104ed56d557fc469f722275480583c19b6f510a-merged.mount: Deactivated successfully.
Nov 26 12:37:44 compute-0 podman[77092]: 2025-11-26 12:37:44.952510728 +0000 UTC m=+0.559155428 container remove c594f485238837d6178eff0f09e0b9aec83d1badd79b41bcf93efc3ba1823d3f (image=quay.io/ceph/ceph:v18, name=gifted_lamport, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 26 12:37:44 compute-0 systemd[1]: libpod-conmon-c594f485238837d6178eff0f09e0b9aec83d1badd79b41bcf93efc3ba1823d3f.scope: Deactivated successfully.
Nov 26 12:37:44 compute-0 podman[77255]: 2025-11-26 12:37:44.997281341 +0000 UTC m=+0.031545664 container create 6703e6f6cfdbba15d23dd9221bd68e4d9b88778c680a3bba035d6a0494939fc4 (image=quay.io/ceph/ceph:v18, name=inspiring_albattani, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:37:45 compute-0 keen_jones[77229]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 26 12:37:45 compute-0 podman[77206]: 2025-11-26 12:37:45.019137025 +0000 UTC m=+0.340809467 container died fa04181f1d93e1c0c8ee0a7b9e28db903eb2f9848b36ef722b5165079e08ddd9 (image=quay.io/ceph/ceph:v18, name=keen_jones, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:37:45 compute-0 systemd[1]: Started libpod-conmon-6703e6f6cfdbba15d23dd9221bd68e4d9b88778c680a3bba035d6a0494939fc4.scope.
Nov 26 12:37:45 compute-0 systemd[1]: libpod-fa04181f1d93e1c0c8ee0a7b9e28db903eb2f9848b36ef722b5165079e08ddd9.scope: Deactivated successfully.
Nov 26 12:37:45 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d69e9e036d214a598ef9d795df0a838b13b207be2363554f7f5a48a46159359/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d69e9e036d214a598ef9d795df0a838b13b207be2363554f7f5a48a46159359/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d69e9e036d214a598ef9d795df0a838b13b207be2363554f7f5a48a46159359/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:45 compute-0 podman[77255]: 2025-11-26 12:37:45.056338266 +0000 UTC m=+0.090602599 container init 6703e6f6cfdbba15d23dd9221bd68e4d9b88778c680a3bba035d6a0494939fc4 (image=quay.io/ceph/ceph:v18, name=inspiring_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:37:45 compute-0 podman[77255]: 2025-11-26 12:37:45.060715624 +0000 UTC m=+0.094979946 container start 6703e6f6cfdbba15d23dd9221bd68e4d9b88778c680a3bba035d6a0494939fc4 (image=quay.io/ceph/ceph:v18, name=inspiring_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:37:45 compute-0 podman[77206]: 2025-11-26 12:37:45.061652699 +0000 UTC m=+0.383325132 container remove fa04181f1d93e1c0c8ee0a7b9e28db903eb2f9848b36ef722b5165079e08ddd9 (image=quay.io/ceph/ceph:v18, name=keen_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:37:45 compute-0 podman[77255]: 2025-11-26 12:37:45.065206416 +0000 UTC m=+0.099470759 container attach 6703e6f6cfdbba15d23dd9221bd68e4d9b88778c680a3bba035d6a0494939fc4 (image=quay.io/ceph/ceph:v18, name=inspiring_albattani, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 12:37:45 compute-0 systemd[1]: libpod-conmon-fa04181f1d93e1c0c8ee0a7b9e28db903eb2f9848b36ef722b5165079e08ddd9.scope: Deactivated successfully.
Nov 26 12:37:45 compute-0 podman[77255]: 2025-11-26 12:37:44.983199096 +0000 UTC m=+0.017463419 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:45 compute-0 sudo[77161]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:45 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Nov 26 12:37:45 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:45 compute-0 sudo[77284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:45 compute-0 sudo[77284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:45 compute-0 sudo[77284]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:45 compute-0 sudo[77309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:37:45 compute-0 sudo[77309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:45 compute-0 sudo[77309]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:45 compute-0 sudo[77334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:45 compute-0 sudo[77334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:45 compute-0 sudo[77334]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:45 compute-0 sudo[77359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 26 12:37:45 compute-0 sudo[77359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:45 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:45 compute-0 ceph-mon[74966]: Added host compute-0
Nov 26 12:37:45 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 12:37:45 compute-0 ceph-mon[74966]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:37:45 compute-0 ceph-mon[74966]: Saving service mon spec with placement count:5
Nov 26 12:37:45 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:45 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-5358e874143d7d19fd0dd7cbfce8f0afdccf51b15b61c1bdcefe3061eba472cf-merged.mount: Deactivated successfully.
Nov 26 12:37:45 compute-0 sudo[77359]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:45 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:37:45 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:45 compute-0 sudo[77421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:45 compute-0 sudo[77421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:45 compute-0 sudo[77421]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:45 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:37:45 compute-0 ceph-mgr[75236]: [cephadm INFO root] Saving service mgr spec with placement count:2
Nov 26 12:37:45 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Nov 26 12:37:45 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 26 12:37:45 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:45 compute-0 inspiring_albattani[77269]: Scheduled mgr update...
Nov 26 12:37:45 compute-0 systemd[1]: libpod-6703e6f6cfdbba15d23dd9221bd68e4d9b88778c680a3bba035d6a0494939fc4.scope: Deactivated successfully.
Nov 26 12:37:45 compute-0 podman[77255]: 2025-11-26 12:37:45.519892697 +0000 UTC m=+0.554157021 container died 6703e6f6cfdbba15d23dd9221bd68e4d9b88778c680a3bba035d6a0494939fc4 (image=quay.io/ceph/ceph:v18, name=inspiring_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:37:45 compute-0 sudo[77446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:37:45 compute-0 sudo[77446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d69e9e036d214a598ef9d795df0a838b13b207be2363554f7f5a48a46159359-merged.mount: Deactivated successfully.
Nov 26 12:37:45 compute-0 sudo[77446]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:45 compute-0 podman[77255]: 2025-11-26 12:37:45.546586698 +0000 UTC m=+0.580851021 container remove 6703e6f6cfdbba15d23dd9221bd68e4d9b88778c680a3bba035d6a0494939fc4 (image=quay.io/ceph/ceph:v18, name=inspiring_albattani, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 12:37:45 compute-0 systemd[1]: libpod-conmon-6703e6f6cfdbba15d23dd9221bd68e4d9b88778c680a3bba035d6a0494939fc4.scope: Deactivated successfully.
Nov 26 12:37:45 compute-0 sudo[77483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:45 compute-0 podman[77486]: 2025-11-26 12:37:45.592001696 +0000 UTC m=+0.029572616 container create 33bdf644eb8edf00cf62b52996cbd761a470b3ac3530a272f0484c26e1759c00 (image=quay.io/ceph/ceph:v18, name=nostalgic_zhukovsky, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 12:37:45 compute-0 sudo[77483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:45 compute-0 sudo[77483]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:45 compute-0 systemd[1]: Started libpod-conmon-33bdf644eb8edf00cf62b52996cbd761a470b3ac3530a272f0484c26e1759c00.scope.
Nov 26 12:37:45 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d6b5b21f466fb53038e27dde1b2330601f01b38b560a70dc85f9349b62763f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d6b5b21f466fb53038e27dde1b2330601f01b38b560a70dc85f9349b62763f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d6b5b21f466fb53038e27dde1b2330601f01b38b560a70dc85f9349b62763f8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:45 compute-0 podman[77486]: 2025-11-26 12:37:45.638361324 +0000 UTC m=+0.075932254 container init 33bdf644eb8edf00cf62b52996cbd761a470b3ac3530a272f0484c26e1759c00 (image=quay.io/ceph/ceph:v18, name=nostalgic_zhukovsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 12:37:45 compute-0 podman[77486]: 2025-11-26 12:37:45.643242301 +0000 UTC m=+0.080813221 container start 33bdf644eb8edf00cf62b52996cbd761a470b3ac3530a272f0484c26e1759c00 (image=quay.io/ceph/ceph:v18, name=nostalgic_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:37:45 compute-0 podman[77486]: 2025-11-26 12:37:45.64458936 +0000 UTC m=+0.082160301 container attach 33bdf644eb8edf00cf62b52996cbd761a470b3ac3530a272f0484c26e1759c00 (image=quay.io/ceph/ceph:v18, name=nostalgic_zhukovsky, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 12:37:45 compute-0 sudo[77521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 26 12:37:45 compute-0 sudo[77521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:45 compute-0 podman[77486]: 2025-11-26 12:37:45.580023715 +0000 UTC m=+0.017594655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:45 compute-0 ceph-mgr[75236]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 12:37:45 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054713 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:37:45 compute-0 podman[77624]: 2025-11-26 12:37:45.979829482 +0000 UTC m=+0.036720647 container exec ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 12:37:46 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:37:46 compute-0 ceph-mgr[75236]: [cephadm INFO root] Saving service crash spec with placement *
Nov 26 12:37:46 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Nov 26 12:37:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 26 12:37:46 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:46 compute-0 nostalgic_zhukovsky[77525]: Scheduled crash update...
Nov 26 12:37:46 compute-0 systemd[1]: libpod-33bdf644eb8edf00cf62b52996cbd761a470b3ac3530a272f0484c26e1759c00.scope: Deactivated successfully.
Nov 26 12:37:46 compute-0 conmon[77525]: conmon 33bdf644eb8edf00cf62 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-33bdf644eb8edf00cf62b52996cbd761a470b3ac3530a272f0484c26e1759c00.scope/container/memory.events
Nov 26 12:37:46 compute-0 podman[77486]: 2025-11-26 12:37:46.101523626 +0000 UTC m=+0.539094547 container died 33bdf644eb8edf00cf62b52996cbd761a470b3ac3530a272f0484c26e1759c00 (image=quay.io/ceph/ceph:v18, name=nostalgic_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:37:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d6b5b21f466fb53038e27dde1b2330601f01b38b560a70dc85f9349b62763f8-merged.mount: Deactivated successfully.
Nov 26 12:37:46 compute-0 podman[77486]: 2025-11-26 12:37:46.125214298 +0000 UTC m=+0.562785219 container remove 33bdf644eb8edf00cf62b52996cbd761a470b3ac3530a272f0484c26e1759c00 (image=quay.io/ceph/ceph:v18, name=nostalgic_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 12:37:46 compute-0 systemd[1]: libpod-conmon-33bdf644eb8edf00cf62b52996cbd761a470b3ac3530a272f0484c26e1759c00.scope: Deactivated successfully.
Nov 26 12:37:46 compute-0 podman[77653]: 2025-11-26 12:37:46.168613197 +0000 UTC m=+0.028424062 container create 3e6d124e465d8a34ea21126be019b830dc33aad72ddc51070fc923fdc9aafac5 (image=quay.io/ceph/ceph:v18, name=nervous_carver, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:37:46 compute-0 systemd[1]: Started libpod-conmon-3e6d124e465d8a34ea21126be019b830dc33aad72ddc51070fc923fdc9aafac5.scope.
Nov 26 12:37:46 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e180a1db322013b4b6cefa8462abddd3fb562182f88e1a1f8a5a4240a529f19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e180a1db322013b4b6cefa8462abddd3fb562182f88e1a1f8a5a4240a529f19/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e180a1db322013b4b6cefa8462abddd3fb562182f88e1a1f8a5a4240a529f19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:46 compute-0 podman[77653]: 2025-11-26 12:37:46.221649697 +0000 UTC m=+0.081460572 container init 3e6d124e465d8a34ea21126be019b830dc33aad72ddc51070fc923fdc9aafac5 (image=quay.io/ceph/ceph:v18, name=nervous_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 26 12:37:46 compute-0 podman[77653]: 2025-11-26 12:37:46.22741454 +0000 UTC m=+0.087225404 container start 3e6d124e465d8a34ea21126be019b830dc33aad72ddc51070fc923fdc9aafac5 (image=quay.io/ceph/ceph:v18, name=nervous_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 26 12:37:46 compute-0 podman[77653]: 2025-11-26 12:37:46.228875643 +0000 UTC m=+0.088686508 container attach 3e6d124e465d8a34ea21126be019b830dc33aad72ddc51070fc923fdc9aafac5 (image=quay.io/ceph/ceph:v18, name=nervous_carver, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:37:46 compute-0 podman[77653]: 2025-11-26 12:37:46.157423191 +0000 UTC m=+0.017234076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:46 compute-0 podman[77671]: 2025-11-26 12:37:46.293856249 +0000 UTC m=+0.048739924 container exec_died ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:37:46 compute-0 podman[77624]: 2025-11-26 12:37:46.297177358 +0000 UTC m=+0.354068513 container exec_died ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:37:46 compute-0 sudo[77521]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:37:46 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:46 compute-0 sudo[77697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:46 compute-0 sudo[77697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:46 compute-0 sudo[77697]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:46 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:46 compute-0 ceph-mon[74966]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:37:46 compute-0 ceph-mon[74966]: Saving service mgr spec with placement count:2
Nov 26 12:37:46 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:46 compute-0 ceph-mon[74966]: from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:37:46 compute-0 ceph-mon[74966]: Saving service crash spec with placement *
Nov 26 12:37:46 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:46 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
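The audit entries above correspond to the `ceph orch apply` calls that save the mgr and crash service specs during bootstrap. A minimal sketch (illustrative, not part of the captured journal) of replaying the same specs from a host that holds the client.admin keyring, with the placement values taken from the log lines:

    # Illustrative only: re-issue the service specs recorded in the audit channel.
    # Assumes /etc/ceph/ceph.conf and the client.admin keyring are present locally.
    import subprocess

    def ceph_orch(*args):
        # mirrors the 'orch apply' mon_commands dispatched by cephadm above
        subprocess.run(["ceph", "orch", *args], check=True)

    ceph_orch("apply", "mgr", "--placement=2")    # "Saving service mgr spec with placement count:2"
    ceph_orch("apply", "crash", "--placement=*")  # "Saving service crash spec with placement *"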
Nov 26 12:37:46 compute-0 sudo[77722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:37:46 compute-0 sudo[77722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:46 compute-0 sudo[77722]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:46 compute-0 sudo[77747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:46 compute-0 sudo[77747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:46 compute-0 sudo[77747]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:46 compute-0 sudo[77791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 12:37:46 compute-0 sudo[77791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Nov 26 12:37:46 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1109739817' entity='client.admin' 
Nov 26 12:37:46 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77827 (sysctl)
Nov 26 12:37:46 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 26 12:37:46 compute-0 systemd[1]: libpod-3e6d124e465d8a34ea21126be019b830dc33aad72ddc51070fc923fdc9aafac5.scope: Deactivated successfully.
Nov 26 12:37:46 compute-0 podman[77653]: 2025-11-26 12:37:46.68731712 +0000 UTC m=+0.547127995 container died 3e6d124e465d8a34ea21126be019b830dc33aad72ddc51070fc923fdc9aafac5 (image=quay.io/ceph/ceph:v18, name=nervous_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 12:37:46 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 26 12:37:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e180a1db322013b4b6cefa8462abddd3fb562182f88e1a1f8a5a4240a529f19-merged.mount: Deactivated successfully.
Nov 26 12:37:46 compute-0 podman[77653]: 2025-11-26 12:37:46.721441382 +0000 UTC m=+0.581252237 container remove 3e6d124e465d8a34ea21126be019b830dc33aad72ddc51070fc923fdc9aafac5 (image=quay.io/ceph/ceph:v18, name=nervous_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 26 12:37:46 compute-0 systemd[1]: libpod-conmon-3e6d124e465d8a34ea21126be019b830dc33aad72ddc51070fc923fdc9aafac5.scope: Deactivated successfully.
Nov 26 12:37:46 compute-0 podman[77843]: 2025-11-26 12:37:46.763095534 +0000 UTC m=+0.026280884 container create 001c82cd8629f2d99d30801990495825b473212ed04445438075c6ef04506186 (image=quay.io/ceph/ceph:v18, name=wizardly_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:37:46 compute-0 systemd[1]: Started libpod-conmon-001c82cd8629f2d99d30801990495825b473212ed04445438075c6ef04506186.scope.
Nov 26 12:37:46 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5acd65f0f483fb5f2a4ebe9409404ba8f233797b8ff3bf17ac656049daef1c4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5acd65f0f483fb5f2a4ebe9409404ba8f233797b8ff3bf17ac656049daef1c4e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5acd65f0f483fb5f2a4ebe9409404ba8f233797b8ff3bf17ac656049daef1c4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:46 compute-0 podman[77843]: 2025-11-26 12:37:46.824708304 +0000 UTC m=+0.087893674 container init 001c82cd8629f2d99d30801990495825b473212ed04445438075c6ef04506186 (image=quay.io/ceph/ceph:v18, name=wizardly_gauss, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:37:46 compute-0 podman[77843]: 2025-11-26 12:37:46.829307962 +0000 UTC m=+0.092493311 container start 001c82cd8629f2d99d30801990495825b473212ed04445438075c6ef04506186 (image=quay.io/ceph/ceph:v18, name=wizardly_gauss, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:37:46 compute-0 podman[77843]: 2025-11-26 12:37:46.830519525 +0000 UTC m=+0.093704874 container attach 001c82cd8629f2d99d30801990495825b473212ed04445438075c6ef04506186 (image=quay.io/ceph/ceph:v18, name=wizardly_gauss, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:37:46 compute-0 podman[77843]: 2025-11-26 12:37:46.752409086 +0000 UTC m=+0.015594457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:46 compute-0 sudo[77791]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:46 compute-0 sudo[77878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:46 compute-0 sudo[77878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:46 compute-0 sudo[77878]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:47 compute-0 sudo[77903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:37:47 compute-0 sudo[77903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:47 compute-0 sudo[77903]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:47 compute-0 sudo[77928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:47 compute-0 sudo[77928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:47 compute-0 sudo[77928]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:47 compute-0 sudo[77953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 26 12:37:47 compute-0 sudo[77953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:47 compute-0 sudo[77953]: pam_unix(sudo:session): session closed for user root
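The sudo sessions above show the cephadm mgr module escalating from the ceph-admin account to run the staged cephadm binary for its host checks (gather-facts, list-networks). A rough Python sketch of the equivalent invocations, with the binary path, fsid, and timeout copied from the COMMAND= lines; the --image flag used for list-networks is dropped here for brevity:

    # Illustrative sketch of the host checks driven over SSH by the cephadm module.
    import subprocess

    CEPHADM = ("/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    for verb in ("gather-facts", "list-networks"):
        # ceph-admin sudo-executes the copied binary, exactly as logged above
        subprocess.run(["sudo", "/bin/python3", CEPHADM, "--timeout", "895", verb], check=True)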
Nov 26 12:37:47 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:37:47 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:47 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:37:47 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Nov 26 12:37:47 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:47 compute-0 systemd[1]: libpod-001c82cd8629f2d99d30801990495825b473212ed04445438075c6ef04506186.scope: Deactivated successfully.
Nov 26 12:37:47 compute-0 podman[77843]: 2025-11-26 12:37:47.28607892 +0000 UTC m=+0.549264270 container died 001c82cd8629f2d99d30801990495825b473212ed04445438075c6ef04506186 (image=quay.io/ceph/ceph:v18, name=wizardly_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:37:47 compute-0 sudo[78013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:47 compute-0 sudo[78013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:47 compute-0 sudo[78013]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-5acd65f0f483fb5f2a4ebe9409404ba8f233797b8ff3bf17ac656049daef1c4e-merged.mount: Deactivated successfully.
Nov 26 12:37:47 compute-0 podman[77843]: 2025-11-26 12:37:47.311395455 +0000 UTC m=+0.574580805 container remove 001c82cd8629f2d99d30801990495825b473212ed04445438075c6ef04506186 (image=quay.io/ceph/ceph:v18, name=wizardly_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 12:37:47 compute-0 systemd[1]: libpod-conmon-001c82cd8629f2d99d30801990495825b473212ed04445438075c6ef04506186.scope: Deactivated successfully.
Nov 26 12:37:47 compute-0 sudo[78048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:37:47 compute-0 sudo[78048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:47 compute-0 sudo[78048]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:47 compute-0 podman[78058]: 2025-11-26 12:37:47.361639553 +0000 UTC m=+0.031334297 container create 3a6406738e894a68fd3c2e19dff34258e50ede7b43e3ad6092155e37893a31ac (image=quay.io/ceph/ceph:v18, name=nifty_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:37:47 compute-0 systemd[1]: Started libpod-conmon-3a6406738e894a68fd3c2e19dff34258e50ede7b43e3ad6092155e37893a31ac.scope.
Nov 26 12:37:47 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:47 compute-0 sudo[78085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:47 compute-0 sudo[78085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48aaac686691e8dba51cd4187218ddf3a4b6c50634e43ce14c0fe4ca529a0d50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48aaac686691e8dba51cd4187218ddf3a4b6c50634e43ce14c0fe4ca529a0d50/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48aaac686691e8dba51cd4187218ddf3a4b6c50634e43ce14c0fe4ca529a0d50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:47 compute-0 sudo[78085]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:47 compute-0 podman[78058]: 2025-11-26 12:37:47.418838596 +0000 UTC m=+0.088533350 container init 3a6406738e894a68fd3c2e19dff34258e50ede7b43e3ad6092155e37893a31ac (image=quay.io/ceph/ceph:v18, name=nifty_gagarin, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:37:47 compute-0 podman[78058]: 2025-11-26 12:37:47.424698509 +0000 UTC m=+0.094393242 container start 3a6406738e894a68fd3c2e19dff34258e50ede7b43e3ad6092155e37893a31ac (image=quay.io/ceph/ceph:v18, name=nifty_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 26 12:37:47 compute-0 podman[78058]: 2025-11-26 12:37:47.426051187 +0000 UTC m=+0.095745941 container attach 3a6406738e894a68fd3c2e19dff34258e50ede7b43e3ad6092155e37893a31ac (image=quay.io/ceph/ceph:v18, name=nifty_gagarin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 12:37:47 compute-0 podman[78058]: 2025-11-26 12:37:47.347540205 +0000 UTC m=+0.017234958 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:47 compute-0 sudo[78116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- inventory --format=json-pretty --filter-for-batch
Nov 26 12:37:47 compute-0 sudo[78116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:47 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1109739817' entity='client.admin' 
Nov 26 12:37:47 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:47 compute-0 ceph-mon[74966]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:37:47 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:47 compute-0 podman[78174]: 2025-11-26 12:37:47.676459466 +0000 UTC m=+0.028030341 container create 51a6b1c9f0661a1eeca308ecafbe56b99bd02aba083158da94e3471d83a5631e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_varahamihira, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:37:47 compute-0 systemd[1]: Started libpod-conmon-51a6b1c9f0661a1eeca308ecafbe56b99bd02aba083158da94e3471d83a5631e.scope.
Nov 26 12:37:47 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:47 compute-0 podman[78174]: 2025-11-26 12:37:47.729846556 +0000 UTC m=+0.081417441 container init 51a6b1c9f0661a1eeca308ecafbe56b99bd02aba083158da94e3471d83a5631e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_varahamihira, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:37:47 compute-0 podman[78174]: 2025-11-26 12:37:47.734331898 +0000 UTC m=+0.085902772 container start 51a6b1c9f0661a1eeca308ecafbe56b99bd02aba083158da94e3471d83a5631e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_varahamihira, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 12:37:47 compute-0 podman[78174]: 2025-11-26 12:37:47.735471606 +0000 UTC m=+0.087042480 container attach 51a6b1c9f0661a1eeca308ecafbe56b99bd02aba083158da94e3471d83a5631e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 12:37:47 compute-0 ecstatic_varahamihira[78206]: 167 167
Nov 26 12:37:47 compute-0 systemd[1]: libpod-51a6b1c9f0661a1eeca308ecafbe56b99bd02aba083158da94e3471d83a5631e.scope: Deactivated successfully.
Nov 26 12:37:47 compute-0 conmon[78206]: conmon 51a6b1c9f0661a1eeca3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-51a6b1c9f0661a1eeca308ecafbe56b99bd02aba083158da94e3471d83a5631e.scope/container/memory.events
Nov 26 12:37:47 compute-0 podman[78174]: 2025-11-26 12:37:47.663370802 +0000 UTC m=+0.014941696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:37:47 compute-0 podman[78211]: 2025-11-26 12:37:47.769421911 +0000 UTC m=+0.018028543 container died 51a6b1c9f0661a1eeca308ecafbe56b99bd02aba083158da94e3471d83a5631e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_varahamihira, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 12:37:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2e0706f8b4486625f92b60e4b452e6eaa53d1e01f65b4538d3a5c28d1fbde99-merged.mount: Deactivated successfully.
Nov 26 12:37:47 compute-0 podman[78211]: 2025-11-26 12:37:47.787651852 +0000 UTC m=+0.036258485 container remove 51a6b1c9f0661a1eeca308ecafbe56b99bd02aba083158da94e3471d83a5631e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 12:37:47 compute-0 systemd[1]: libpod-conmon-51a6b1c9f0661a1eeca308ecafbe56b99bd02aba083158da94e3471d83a5631e.scope: Deactivated successfully.
Nov 26 12:37:47 compute-0 ceph-mgr[75236]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 12:37:47 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:37:47 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 26 12:37:47 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:47 compute-0 ceph-mgr[75236]: [cephadm INFO root] Added label _admin to host compute-0
Nov 26 12:37:47 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Nov 26 12:37:47 compute-0 nifty_gagarin[78111]: Added label _admin to host compute-0
Nov 26 12:37:47 compute-0 systemd[1]: libpod-3a6406738e894a68fd3c2e19dff34258e50ede7b43e3ad6092155e37893a31ac.scope: Deactivated successfully.
Nov 26 12:37:47 compute-0 conmon[78111]: conmon 3a6406738e894a68fd3c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3a6406738e894a68fd3c2e19dff34258e50ede7b43e3ad6092155e37893a31ac.scope/container/memory.events
Nov 26 12:37:47 compute-0 podman[78224]: 2025-11-26 12:37:47.907001589 +0000 UTC m=+0.016387071 container died 3a6406738e894a68fd3c2e19dff34258e50ede7b43e3ad6092155e37893a31ac (image=quay.io/ceph/ceph:v18, name=nifty_gagarin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:37:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-48aaac686691e8dba51cd4187218ddf3a4b6c50634e43ce14c0fe4ca529a0d50-merged.mount: Deactivated successfully.
Nov 26 12:37:47 compute-0 podman[78224]: 2025-11-26 12:37:47.927407421 +0000 UTC m=+0.036792882 container remove 3a6406738e894a68fd3c2e19dff34258e50ede7b43e3ad6092155e37893a31ac (image=quay.io/ceph/ceph:v18, name=nifty_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:37:47 compute-0 systemd[1]: libpod-conmon-3a6406738e894a68fd3c2e19dff34258e50ede7b43e3ad6092155e37893a31ac.scope: Deactivated successfully.
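The "Added label _admin to host compute-0" messages above record the orchestrator labelling the bootstrap host (the nifty_gagarin container carried the command). A sketch of the equivalent client-side call, using the same hostname and label as logged:

    # Illustrative: the same label operation that produced the messages above.
    import subprocess
    subprocess.run(["ceph", "orch", "host", "label", "add", "compute-0", "_admin"], check=True)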
Nov 26 12:37:47 compute-0 podman[78236]: 2025-11-26 12:37:47.971207997 +0000 UTC m=+0.026399116 container create 0aa40e6d9c56811a5b552dd82b2bed50a0854f4ddb84c01d1e95f2728b9f9303 (image=quay.io/ceph/ceph:v18, name=great_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 12:37:47 compute-0 systemd[1]: Started libpod-conmon-0aa40e6d9c56811a5b552dd82b2bed50a0854f4ddb84c01d1e95f2728b9f9303.scope.
Nov 26 12:37:48 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18d222ebd8436fb18b10f1c37756d946ad217ed1314e073f7932cbe54dbdada8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18d222ebd8436fb18b10f1c37756d946ad217ed1314e073f7932cbe54dbdada8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18d222ebd8436fb18b10f1c37756d946ad217ed1314e073f7932cbe54dbdada8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:48 compute-0 podman[78236]: 2025-11-26 12:37:48.024343002 +0000 UTC m=+0.079534141 container init 0aa40e6d9c56811a5b552dd82b2bed50a0854f4ddb84c01d1e95f2728b9f9303 (image=quay.io/ceph/ceph:v18, name=great_swanson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 12:37:48 compute-0 podman[78236]: 2025-11-26 12:37:48.028243612 +0000 UTC m=+0.083434731 container start 0aa40e6d9c56811a5b552dd82b2bed50a0854f4ddb84c01d1e95f2728b9f9303 (image=quay.io/ceph/ceph:v18, name=great_swanson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:37:48 compute-0 podman[78236]: 2025-11-26 12:37:48.02959022 +0000 UTC m=+0.084781339 container attach 0aa40e6d9c56811a5b552dd82b2bed50a0854f4ddb84c01d1e95f2728b9f9303 (image=quay.io/ceph/ceph:v18, name=great_swanson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 12:37:48 compute-0 podman[78236]: 2025-11-26 12:37:47.960615487 +0000 UTC m=+0.015806626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:48 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Nov 26 12:37:48 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/124655842' entity='client.admin' 
Nov 26 12:37:48 compute-0 systemd[1]: libpod-0aa40e6d9c56811a5b552dd82b2bed50a0854f4ddb84c01d1e95f2728b9f9303.scope: Deactivated successfully.
Nov 26 12:37:48 compute-0 podman[78275]: 2025-11-26 12:37:48.484194075 +0000 UTC m=+0.014703176 container died 0aa40e6d9c56811a5b552dd82b2bed50a0854f4ddb84c01d1e95f2728b9f9303 (image=quay.io/ceph/ceph:v18, name=great_swanson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 12:37:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-18d222ebd8436fb18b10f1c37756d946ad217ed1314e073f7932cbe54dbdada8-merged.mount: Deactivated successfully.
Nov 26 12:37:48 compute-0 podman[78275]: 2025-11-26 12:37:48.502485813 +0000 UTC m=+0.032994914 container remove 0aa40e6d9c56811a5b552dd82b2bed50a0854f4ddb84c01d1e95f2728b9f9303 (image=quay.io/ceph/ceph:v18, name=great_swanson, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 12:37:48 compute-0 systemd[1]: libpod-conmon-0aa40e6d9c56811a5b552dd82b2bed50a0854f4ddb84c01d1e95f2728b9f9303.scope: Deactivated successfully.
Nov 26 12:37:48 compute-0 podman[78287]: 2025-11-26 12:37:48.541447611 +0000 UTC m=+0.024064356 container create 8175e7784b1ae5cd14a9272291c6d24deb5b5b0d6c3f006cb13f654d4ed304e7 (image=quay.io/ceph/ceph:v18, name=agitated_proskuriakova, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 12:37:48 compute-0 systemd[1]: Started libpod-conmon-8175e7784b1ae5cd14a9272291c6d24deb5b5b0d6c3f006cb13f654d4ed304e7.scope.
Nov 26 12:37:48 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66f27538cc368d469a2f6036539083932e4da397062ef31c144a6f0d700ef65e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66f27538cc368d469a2f6036539083932e4da397062ef31c144a6f0d700ef65e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66f27538cc368d469a2f6036539083932e4da397062ef31c144a6f0d700ef65e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:48 compute-0 podman[78287]: 2025-11-26 12:37:48.591186516 +0000 UTC m=+0.073803252 container init 8175e7784b1ae5cd14a9272291c6d24deb5b5b0d6c3f006cb13f654d4ed304e7 (image=quay.io/ceph/ceph:v18, name=agitated_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:37:48 compute-0 podman[78287]: 2025-11-26 12:37:48.595637394 +0000 UTC m=+0.078254128 container start 8175e7784b1ae5cd14a9272291c6d24deb5b5b0d6c3f006cb13f654d4ed304e7 (image=quay.io/ceph/ceph:v18, name=agitated_proskuriakova, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 12:37:48 compute-0 podman[78287]: 2025-11-26 12:37:48.596722828 +0000 UTC m=+0.079339564 container attach 8175e7784b1ae5cd14a9272291c6d24deb5b5b0d6c3f006cb13f654d4ed304e7 (image=quay.io/ceph/ceph:v18, name=agitated_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 12:37:48 compute-0 podman[78287]: 2025-11-26 12:37:48.53141169 +0000 UTC m=+0.014028445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:48 compute-0 ceph-mon[74966]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:37:48 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:48 compute-0 ceph-mon[74966]: Added label _admin to host compute-0
Nov 26 12:37:48 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/124655842' entity='client.admin' 
Nov 26 12:37:49 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Nov 26 12:37:49 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3505617280' entity='client.admin' 
Nov 26 12:37:49 compute-0 agitated_proskuriakova[78301]: set mgr/dashboard/cluster/status
Nov 26 12:37:49 compute-0 systemd[1]: libpod-8175e7784b1ae5cd14a9272291c6d24deb5b5b0d6c3f006cb13f654d4ed304e7.scope: Deactivated successfully.
Nov 26 12:37:49 compute-0 podman[78287]: 2025-11-26 12:37:49.099656855 +0000 UTC m=+0.582273590 container died 8175e7784b1ae5cd14a9272291c6d24deb5b5b0d6c3f006cb13f654d4ed304e7 (image=quay.io/ceph/ceph:v18, name=agitated_proskuriakova, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 12:37:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-66f27538cc368d469a2f6036539083932e4da397062ef31c144a6f0d700ef65e-merged.mount: Deactivated successfully.
Nov 26 12:37:49 compute-0 podman[78287]: 2025-11-26 12:37:49.118848369 +0000 UTC m=+0.601465114 container remove 8175e7784b1ae5cd14a9272291c6d24deb5b5b0d6c3f006cb13f654d4ed304e7 (image=quay.io/ceph/ceph:v18, name=agitated_proskuriakova, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:37:49 compute-0 systemd[1]: libpod-conmon-8175e7784b1ae5cd14a9272291c6d24deb5b5b0d6c3f006cb13f654d4ed304e7.scope: Deactivated successfully.
Nov 26 12:37:49 compute-0 sudo[74067]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:49 compute-0 podman[78344]: 2025-11-26 12:37:49.250483587 +0000 UTC m=+0.024502703 container create b6c4a88e06b8ad148682a3becda2cb3e1cf0660e7175d7964145e25030fba4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:37:49 compute-0 systemd[1]: Started libpod-conmon-b6c4a88e06b8ad148682a3becda2cb3e1cf0660e7175d7964145e25030fba4d9.scope.
Nov 26 12:37:49 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ea6547ce3870fe2df3579c8bf259138f57e47e379e96209844734fe489dda21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ea6547ce3870fe2df3579c8bf259138f57e47e379e96209844734fe489dda21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ea6547ce3870fe2df3579c8bf259138f57e47e379e96209844734fe489dda21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ea6547ce3870fe2df3579c8bf259138f57e47e379e96209844734fe489dda21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:49 compute-0 podman[78344]: 2025-11-26 12:37:49.301706519 +0000 UTC m=+0.075725624 container init b6c4a88e06b8ad148682a3becda2cb3e1cf0660e7175d7964145e25030fba4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:37:49 compute-0 podman[78344]: 2025-11-26 12:37:49.306832338 +0000 UTC m=+0.080851444 container start b6c4a88e06b8ad148682a3becda2cb3e1cf0660e7175d7964145e25030fba4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:37:49 compute-0 podman[78344]: 2025-11-26 12:37:49.307912804 +0000 UTC m=+0.081931910 container attach b6c4a88e06b8ad148682a3becda2cb3e1cf0660e7175d7964145e25030fba4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tu, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:37:49 compute-0 podman[78344]: 2025-11-26 12:37:49.24059314 +0000 UTC m=+0.014612266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:37:49 compute-0 sudo[78385]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fomscycqhluuqmlaguujrbgoxxcrtleg ; /usr/bin/python3'
Nov 26 12:37:49 compute-0 sudo[78385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:37:49 compute-0 python3[78387]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:37:49 compute-0 podman[78388]: 2025-11-26 12:37:49.525367376 +0000 UTC m=+0.027443383 container create 7e59257c90074ebd5fab92b5517f8f0570ff01434c833ecd754ecc0540b98277 (image=quay.io/ceph/ceph:v18, name=wonderful_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 26 12:37:49 compute-0 systemd[1]: Started libpod-conmon-7e59257c90074ebd5fab92b5517f8f0570ff01434c833ecd754ecc0540b98277.scope.
Nov 26 12:37:49 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be82d35f941cc99546b1e3db3cd581d08883085076e8260497be768cd336e97e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be82d35f941cc99546b1e3db3cd581d08883085076e8260497be768cd336e97e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:49 compute-0 podman[78388]: 2025-11-26 12:37:49.582194789 +0000 UTC m=+0.084270797 container init 7e59257c90074ebd5fab92b5517f8f0570ff01434c833ecd754ecc0540b98277 (image=quay.io/ceph/ceph:v18, name=wonderful_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 12:37:49 compute-0 podman[78388]: 2025-11-26 12:37:49.587158562 +0000 UTC m=+0.089234570 container start 7e59257c90074ebd5fab92b5517f8f0570ff01434c833ecd754ecc0540b98277 (image=quay.io/ceph/ceph:v18, name=wonderful_kalam, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 12:37:49 compute-0 podman[78388]: 2025-11-26 12:37:49.588422865 +0000 UTC m=+0.090498873 container attach 7e59257c90074ebd5fab92b5517f8f0570ff01434c833ecd754ecc0540b98277 (image=quay.io/ceph/ceph:v18, name=wonderful_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:37:49 compute-0 podman[78388]: 2025-11-26 12:37:49.514152323 +0000 UTC m=+0.016228351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:49 compute-0 ceph-mgr[75236]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 12:37:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Nov 26 12:37:50 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1980961670' entity='client.admin' 
Nov 26 12:37:50 compute-0 systemd[1]: libpod-7e59257c90074ebd5fab92b5517f8f0570ff01434c833ecd754ecc0540b98277.scope: Deactivated successfully.
Nov 26 12:37:50 compute-0 podman[78438]: 2025-11-26 12:37:50.057951814 +0000 UTC m=+0.018320144 container died 7e59257c90074ebd5fab92b5517f8f0570ff01434c833ecd754ecc0540b98277 (image=quay.io/ceph/ceph:v18, name=wonderful_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 12:37:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-be82d35f941cc99546b1e3db3cd581d08883085076e8260497be768cd336e97e-merged.mount: Deactivated successfully.
Nov 26 12:37:50 compute-0 podman[78438]: 2025-11-26 12:37:50.083286655 +0000 UTC m=+0.043654983 container remove 7e59257c90074ebd5fab92b5517f8f0570ff01434c833ecd754ecc0540b98277 (image=quay.io/ceph/ceph:v18, name=wonderful_kalam, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 12:37:50 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/3505617280' entity='client.admin' 
Nov 26 12:37:50 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1980961670' entity='client.admin' 
Nov 26 12:37:50 compute-0 systemd[1]: libpod-conmon-7e59257c90074ebd5fab92b5517f8f0570ff01434c833ecd754ecc0540b98277.scope: Deactivated successfully.
Nov 26 12:37:50 compute-0 sudo[78385]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:50 compute-0 strange_tu[78357]: [
Nov 26 12:37:50 compute-0 strange_tu[78357]:     {
Nov 26 12:37:50 compute-0 strange_tu[78357]:         "available": false,
Nov 26 12:37:50 compute-0 strange_tu[78357]:         "ceph_device": false,
Nov 26 12:37:50 compute-0 strange_tu[78357]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 26 12:37:50 compute-0 strange_tu[78357]:         "lsm_data": {},
Nov 26 12:37:50 compute-0 strange_tu[78357]:         "lvs": [],
Nov 26 12:37:50 compute-0 strange_tu[78357]:         "path": "/dev/sr0",
Nov 26 12:37:50 compute-0 strange_tu[78357]:         "rejected_reasons": [
Nov 26 12:37:50 compute-0 strange_tu[78357]:             "Has a FileSystem",
Nov 26 12:37:50 compute-0 strange_tu[78357]:             "Insufficient space (<5GB)"
Nov 26 12:37:50 compute-0 strange_tu[78357]:         ],
Nov 26 12:37:50 compute-0 strange_tu[78357]:         "sys_api": {
Nov 26 12:37:50 compute-0 strange_tu[78357]:             "actuators": null,
Nov 26 12:37:50 compute-0 strange_tu[78357]:             "device_nodes": "sr0",
Nov 26 12:37:50 compute-0 strange_tu[78357]:             "devname": "sr0",
Nov 26 12:37:50 compute-0 strange_tu[78357]:             "human_readable_size": "474.00 KB",
Nov 26 12:37:50 compute-0 strange_tu[78357]:             "id_bus": "ata",
Nov 26 12:37:50 compute-0 strange_tu[78357]:             "model": "QEMU DVD-ROM",
Nov 26 12:37:50 compute-0 strange_tu[78357]:             "nr_requests": "64",
Nov 26 12:37:50 compute-0 strange_tu[78357]:             "parent": "/dev/sr0",
Nov 26 12:37:50 compute-0 strange_tu[78357]:             "partitions": {},
Nov 26 12:37:50 compute-0 strange_tu[78357]:             "path": "/dev/sr0",
Nov 26 12:37:50 compute-0 strange_tu[78357]:             "removable": "1",
Nov 26 12:37:50 compute-0 strange_tu[78357]:             "rev": "2.5+",
Nov 26 12:37:50 compute-0 strange_tu[78357]:             "ro": "0",
Nov 26 12:37:50 compute-0 strange_tu[78357]:             "rotational": "1",
Nov 26 12:37:50 compute-0 strange_tu[78357]:             "sas_address": "",
Nov 26 12:37:50 compute-0 strange_tu[78357]:             "sas_device_handle": "",
Nov 26 12:37:50 compute-0 strange_tu[78357]:             "scheduler_mode": "mq-deadline",
Nov 26 12:37:50 compute-0 strange_tu[78357]:             "sectors": 0,
Nov 26 12:37:50 compute-0 strange_tu[78357]:             "sectorsize": "2048",
Nov 26 12:37:50 compute-0 strange_tu[78357]:             "size": 485376.0,
Nov 26 12:37:50 compute-0 strange_tu[78357]:             "support_discard": "2048",
Nov 26 12:37:50 compute-0 strange_tu[78357]:             "type": "disk",
Nov 26 12:37:50 compute-0 strange_tu[78357]:             "vendor": "QEMU"
Nov 26 12:37:50 compute-0 strange_tu[78357]:         }
Nov 26 12:37:50 compute-0 strange_tu[78357]:     }
Nov 26 12:37:50 compute-0 strange_tu[78357]: ]
Nov 26 12:37:50 compute-0 systemd[1]: libpod-b6c4a88e06b8ad148682a3becda2cb3e1cf0660e7175d7964145e25030fba4d9.scope: Deactivated successfully.
Nov 26 12:37:50 compute-0 systemd[1]: libpod-b6c4a88e06b8ad148682a3becda2cb3e1cf0660e7175d7964145e25030fba4d9.scope: Consumed 1.001s CPU time.
Nov 26 12:37:50 compute-0 podman[78344]: 2025-11-26 12:37:50.328920877 +0000 UTC m=+1.102939982 container died b6c4a88e06b8ad148682a3becda2cb3e1cf0660e7175d7964145e25030fba4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tu, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 12:37:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ea6547ce3870fe2df3579c8bf259138f57e47e379e96209844734fe489dda21-merged.mount: Deactivated successfully.
Nov 26 12:37:50 compute-0 podman[78344]: 2025-11-26 12:37:50.361186696 +0000 UTC m=+1.135205803 container remove b6c4a88e06b8ad148682a3becda2cb3e1cf0660e7175d7964145e25030fba4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:37:50 compute-0 systemd[1]: libpod-conmon-b6c4a88e06b8ad148682a3becda2cb3e1cf0660e7175d7964145e25030fba4d9.scope: Deactivated successfully.
Nov 26 12:37:50 compute-0 sudo[78116]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:37:50 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:37:50 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:37:50 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:37:50 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 26 12:37:50 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 26 12:37:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:37:50 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:37:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 12:37:50 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:37:50 compute-0 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 26 12:37:50 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 26 12:37:50 compute-0 sudo[80042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:50 compute-0 sudo[80042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:50 compute-0 sudo[80042]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:50 compute-0 sudo[80090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 26 12:37:50 compute-0 sudo[80090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:50 compute-0 sudo[80090]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:50 compute-0 sudo[80115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:50 compute-0 sudo[80115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:50 compute-0 sudo[80115]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:50 compute-0 sudo[80140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e/etc/ceph
Nov 26 12:37:50 compute-0 sudo[80140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:50 compute-0 sudo[80140]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:50 compute-0 sudo[80165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:50 compute-0 sudo[80165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:50 compute-0 sudo[80165]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:50 compute-0 sudo[80213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e/etc/ceph/ceph.conf.new
Nov 26 12:37:50 compute-0 sudo[80213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:50 compute-0 sudo[80213]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:50 compute-0 sudo[80262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:50 compute-0 sudo[80262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:50 compute-0 sudo[80262]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:50 compute-0 sudo[80311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mujoecffklvgqhpvdxfvzvcbrtylzmww ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764160670.3639514-36930-108437883879878/async_wrapper.py j939461483014 30 /home/zuul/.ansible/tmp/ansible-tmp-1764160670.3639514-36930-108437883879878/AnsiballZ_command.py _'
Nov 26 12:37:50 compute-0 sudo[80311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:37:50 compute-0 sudo[80313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 12:37:50 compute-0 sudo[80313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:50 compute-0 sudo[80313]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:50 compute-0 sudo[80340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:50 compute-0 sudo[80340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:50 compute-0 sudo[80340]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:50 compute-0 sudo[80365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e/etc/ceph/ceph.conf.new
Nov 26 12:37:50 compute-0 sudo[80365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:50 compute-0 sudo[80365]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:50 compute-0 ansible-async_wrapper.py[80319]: Invoked with j939461483014 30 /home/zuul/.ansible/tmp/ansible-tmp-1764160670.3639514-36930-108437883879878/AnsiballZ_command.py _
Nov 26 12:37:50 compute-0 ansible-async_wrapper.py[80400]: Starting module and watcher
Nov 26 12:37:50 compute-0 ansible-async_wrapper.py[80400]: Start watching 80405 (30)
Nov 26 12:37:50 compute-0 ansible-async_wrapper.py[80405]: Start module (80405)
Nov 26 12:37:50 compute-0 ansible-async_wrapper.py[80319]: Return async_wrapper task started.
Nov 26 12:37:50 compute-0 sudo[80311]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:50 compute-0 sudo[80418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:50 compute-0 sudo[80418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:50 compute-0 sudo[80418]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:37:50 compute-0 sudo[80443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e/etc/ceph/ceph.conf.new
Nov 26 12:37:50 compute-0 sudo[80443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:50 compute-0 sudo[80443]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:50 compute-0 python3[80415]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:37:50 compute-0 sudo[80468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:50 compute-0 sudo[80468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:50 compute-0 sudo[80468]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:50 compute-0 podman[80489]: 2025-11-26 12:37:50.992724238 +0000 UTC m=+0.029336692 container create 3e5ba29b37ba48a6d6b647a899a00fba6c3913e468f546fabb9479e7374a9733 (image=quay.io/ceph/ceph:v18, name=sweet_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:37:51 compute-0 systemd[1]: Started libpod-conmon-3e5ba29b37ba48a6d6b647a899a00fba6c3913e468f546fabb9479e7374a9733.scope.
Nov 26 12:37:51 compute-0 sudo[80502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e/etc/ceph/ceph.conf.new
Nov 26 12:37:51 compute-0 sudo[80502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:51 compute-0 sudo[80502]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:51 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660053ba9439ce274c44f6b68a32a2a616d430b40adbcf69def7f8a1c813fa95/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660053ba9439ce274c44f6b68a32a2a616d430b40adbcf69def7f8a1c813fa95/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:51 compute-0 podman[80489]: 2025-11-26 12:37:51.053861683 +0000 UTC m=+0.090474167 container init 3e5ba29b37ba48a6d6b647a899a00fba6c3913e468f546fabb9479e7374a9733 (image=quay.io/ceph/ceph:v18, name=sweet_franklin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:37:51 compute-0 podman[80489]: 2025-11-26 12:37:51.060501574 +0000 UTC m=+0.097114038 container start 3e5ba29b37ba48a6d6b647a899a00fba6c3913e468f546fabb9479e7374a9733 (image=quay.io/ceph/ceph:v18, name=sweet_franklin, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:37:51 compute-0 podman[80489]: 2025-11-26 12:37:51.061810922 +0000 UTC m=+0.098423396 container attach 3e5ba29b37ba48a6d6b647a899a00fba6c3913e468f546fabb9479e7374a9733 (image=quay.io/ceph/ceph:v18, name=sweet_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Nov 26 12:37:51 compute-0 sudo[80534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:51 compute-0 sudo[80534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:51 compute-0 sudo[80534]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:51 compute-0 podman[80489]: 2025-11-26 12:37:50.982016912 +0000 UTC m=+0.018629405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:51 compute-0 sudo[80560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 26 12:37:51 compute-0 sudo[80560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:51 compute-0 sudo[80560]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:51 compute-0 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/config/ceph.conf
Nov 26 12:37:51 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/config/ceph.conf
Nov 26 12:37:51 compute-0 sudo[80585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:51 compute-0 sudo[80585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:51 compute-0 sudo[80585]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:51 compute-0 sudo[80610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/config
Nov 26 12:37:51 compute-0 sudo[80610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:51 compute-0 sudo[80610]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:51 compute-0 sudo[80635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:51 compute-0 sudo[80635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:51 compute-0 sudo[80635]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:51 compute-0 sudo[80660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/config
Nov 26 12:37:51 compute-0 sudo[80660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:51 compute-0 sudo[80660]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:51 compute-0 sudo[80685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:51 compute-0 sudo[80685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:51 compute-0 sudo[80685]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:51 compute-0 sudo[80729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/config/ceph.conf.new
Nov 26 12:37:51 compute-0 sudo[80729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:51 compute-0 sudo[80729]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:51 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:51 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:51 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:51 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:51 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 26 12:37:51 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:37:51 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:37:51 compute-0 ceph-mon[74966]: Updating compute-0:/etc/ceph/ceph.conf
Nov 26 12:37:51 compute-0 ceph-mon[74966]: Updating compute-0:/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/config/ceph.conf
Nov 26 12:37:51 compute-0 sudo[80754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:51 compute-0 sudo[80754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:51 compute-0 sudo[80754]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:51 compute-0 sudo[80779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 12:37:51 compute-0 sudo[80779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:51 compute-0 sudo[80779]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:51 compute-0 sudo[80804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:51 compute-0 sudo[80804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:51 compute-0 sudo[80804]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:51 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 12:37:51 compute-0 sweet_franklin[80530]: 
Nov 26 12:37:51 compute-0 sweet_franklin[80530]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 26 12:37:51 compute-0 systemd[1]: libpod-3e5ba29b37ba48a6d6b647a899a00fba6c3913e468f546fabb9479e7374a9733.scope: Deactivated successfully.
Nov 26 12:37:51 compute-0 podman[80489]: 2025-11-26 12:37:51.510614568 +0000 UTC m=+0.547227032 container died 3e5ba29b37ba48a6d6b647a899a00fba6c3913e468f546fabb9479e7374a9733 (image=quay.io/ceph/ceph:v18, name=sweet_franklin, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 12:37:51 compute-0 sudo[80829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/config/ceph.conf.new
Nov 26 12:37:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-660053ba9439ce274c44f6b68a32a2a616d430b40adbcf69def7f8a1c813fa95-merged.mount: Deactivated successfully.
Nov 26 12:37:51 compute-0 sudo[80829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:51 compute-0 sudo[80829]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:51 compute-0 podman[80489]: 2025-11-26 12:37:51.536209587 +0000 UTC m=+0.572822061 container remove 3e5ba29b37ba48a6d6b647a899a00fba6c3913e468f546fabb9479e7374a9733 (image=quay.io/ceph/ceph:v18, name=sweet_franklin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:37:51 compute-0 systemd[1]: libpod-conmon-3e5ba29b37ba48a6d6b647a899a00fba6c3913e468f546fabb9479e7374a9733.scope: Deactivated successfully.
Nov 26 12:37:51 compute-0 ansible-async_wrapper.py[80405]: Module complete (80405)
Nov 26 12:37:51 compute-0 sudo[80889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:51 compute-0 sudo[80889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:51 compute-0 sudo[80889]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:51 compute-0 sudo[80914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/config/ceph.conf.new
Nov 26 12:37:51 compute-0 sudo[80914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:51 compute-0 sudo[80914]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:51 compute-0 sudo[80939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:51 compute-0 sudo[80939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:51 compute-0 sudo[80939]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:51 compute-0 sudo[80964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/config/ceph.conf.new
Nov 26 12:37:51 compute-0 sudo[80964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:51 compute-0 sudo[80964]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:51 compute-0 sudo[80989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:51 compute-0 sudo[80989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:51 compute-0 sudo[80989]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:51 compute-0 sudo[81014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/config/ceph.conf.new /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/config/ceph.conf
Nov 26 12:37:51 compute-0 sudo[81014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:51 compute-0 sudo[81014]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:51 compute-0 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 26 12:37:51 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 26 12:37:51 compute-0 ceph-mgr[75236]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 12:37:51 compute-0 sudo[81039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:51 compute-0 sudo[81039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:51 compute-0 sudo[81039]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:51 compute-0 sudo[81064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 26 12:37:51 compute-0 sudo[81064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:51 compute-0 sudo[81064]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:51 compute-0 sudo[81112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:51 compute-0 sudo[81112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:51 compute-0 sudo[81112]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:51 compute-0 sudo[81137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e/etc/ceph
Nov 26 12:37:51 compute-0 sudo[81137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:51 compute-0 sudo[81137]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:52 compute-0 sudo[81162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:52 compute-0 sudo[81162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:52 compute-0 sudo[81162]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:52 compute-0 sudo[81208]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzcuwhwpvyhtiwtvyeciztydonayubdj ; /usr/bin/python3'
Nov 26 12:37:52 compute-0 sudo[81208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:37:52 compute-0 sudo[81212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e/etc/ceph/ceph.client.admin.keyring.new
Nov 26 12:37:52 compute-0 sudo[81212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:52 compute-0 sudo[81212]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:52 compute-0 sudo[81238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:52 compute-0 sudo[81238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:52 compute-0 sudo[81238]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:52 compute-0 sudo[81263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 12:37:52 compute-0 sudo[81263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:52 compute-0 sudo[81263]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:52 compute-0 python3[81213]: ansible-ansible.legacy.async_status Invoked with jid=j939461483014.80319 mode=status _async_dir=/root/.ansible_async
Nov 26 12:37:52 compute-0 sudo[81208]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:52 compute-0 sudo[81288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:52 compute-0 sudo[81288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:52 compute-0 sudo[81288]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:52 compute-0 sudo[81333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e/etc/ceph/ceph.client.admin.keyring.new
Nov 26 12:37:52 compute-0 sudo[81333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:52 compute-0 sudo[81333]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:52 compute-0 sudo[81388]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvgabycptygsokdfdhbrwojrbtandeqt ; /usr/bin/python3'
Nov 26 12:37:52 compute-0 sudo[81388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:37:52 compute-0 sudo[81410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:52 compute-0 sudo[81410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:52 compute-0 sudo[81410]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:52 compute-0 sudo[81435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e/etc/ceph/ceph.client.admin.keyring.new
Nov 26 12:37:52 compute-0 sudo[81435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:52 compute-0 sudo[81435]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:52 compute-0 python3[81407]: ansible-ansible.legacy.async_status Invoked with jid=j939461483014.80319 mode=cleanup _async_dir=/root/.ansible_async
Nov 26 12:37:52 compute-0 sudo[81388]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:52 compute-0 sudo[81460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:52 compute-0 sudo[81460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:52 compute-0 sudo[81460]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:52 compute-0 ceph-mon[74966]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 12:37:52 compute-0 ceph-mon[74966]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 26 12:37:52 compute-0 sudo[81485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e/etc/ceph/ceph.client.admin.keyring.new
Nov 26 12:37:52 compute-0 sudo[81485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:52 compute-0 sudo[81485]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:52 compute-0 sudo[81510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:52 compute-0 sudo[81510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:52 compute-0 sudo[81510]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:52 compute-0 sudo[81535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 26 12:37:52 compute-0 sudo[81535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:52 compute-0 sudo[81535]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:52 compute-0 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/config/ceph.client.admin.keyring
Nov 26 12:37:52 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/config/ceph.client.admin.keyring
Nov 26 12:37:52 compute-0 sudo[81560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:52 compute-0 sudo[81560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:52 compute-0 sudo[81560]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:52 compute-0 sudo[81585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/config
Nov 26 12:37:52 compute-0 sudo[81585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:52 compute-0 sudo[81585]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:52 compute-0 sudo[81646]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axtmreoyirwhisovqplrxnvwhojfhqpf ; /usr/bin/python3'
Nov 26 12:37:52 compute-0 sudo[81646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:37:52 compute-0 sudo[81618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:52 compute-0 sudo[81618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:52 compute-0 sudo[81618]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:52 compute-0 sudo[81661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/config
Nov 26 12:37:52 compute-0 sudo[81661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:52 compute-0 sudo[81661]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:52 compute-0 sudo[81686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:52 compute-0 sudo[81686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:52 compute-0 sudo[81686]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:52 compute-0 python3[81658]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 12:37:52 compute-0 sudo[81711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/config/ceph.client.admin.keyring.new
Nov 26 12:37:52 compute-0 sudo[81711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:52 compute-0 sudo[81711]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:52 compute-0 sudo[81646]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:52 compute-0 sudo[81738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:52 compute-0 sudo[81738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:52 compute-0 sudo[81738]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:52 compute-0 sudo[81763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 12:37:52 compute-0 sudo[81763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:52 compute-0 sudo[81763]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:52 compute-0 sudo[81788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:52 compute-0 sudo[81788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:52 compute-0 sudo[81788]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:52 compute-0 sudo[81813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/config/ceph.client.admin.keyring.new
Nov 26 12:37:52 compute-0 sudo[81813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:52 compute-0 sudo[81813]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:52 compute-0 sudo[81862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:52 compute-0 sudo[81907]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riqamnqkmiefyvissxiffxaqyidsamlm ; /usr/bin/python3'
Nov 26 12:37:52 compute-0 sudo[81862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:52 compute-0 sudo[81907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:37:52 compute-0 sudo[81862]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:52 compute-0 sudo[81912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/config/ceph.client.admin.keyring.new
Nov 26 12:37:52 compute-0 sudo[81912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:52 compute-0 sudo[81912]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:53 compute-0 sudo[81937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:53 compute-0 sudo[81937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:53 compute-0 sudo[81937]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:53 compute-0 python3[81911]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:37:53 compute-0 sudo[81962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/config/ceph.client.admin.keyring.new
Nov 26 12:37:53 compute-0 sudo[81962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:53 compute-0 sudo[81962]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:53 compute-0 podman[81985]: 2025-11-26 12:37:53.111231429 +0000 UTC m=+0.031232143 container create 1427a43566c4e8351160046301d5541b03039e69683fc7d2dc72fae9f9c36a27 (image=quay.io/ceph/ceph:v18, name=friendly_chandrasekhar, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 12:37:53 compute-0 systemd[1]: Started libpod-conmon-1427a43566c4e8351160046301d5541b03039e69683fc7d2dc72fae9f9c36a27.scope.
Nov 26 12:37:53 compute-0 sudo[81993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:53 compute-0 sudo[81993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:53 compute-0 sudo[81993]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:53 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7447f506c7d41e57054d439d5a4224e49b1cfe99569efff723b07cb9a5992995/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7447f506c7d41e57054d439d5a4224e49b1cfe99569efff723b07cb9a5992995/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7447f506c7d41e57054d439d5a4224e49b1cfe99569efff723b07cb9a5992995/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:53 compute-0 podman[81985]: 2025-11-26 12:37:53.164471593 +0000 UTC m=+0.084472316 container init 1427a43566c4e8351160046301d5541b03039e69683fc7d2dc72fae9f9c36a27 (image=quay.io/ceph/ceph:v18, name=friendly_chandrasekhar, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:37:53 compute-0 podman[81985]: 2025-11-26 12:37:53.168699449 +0000 UTC m=+0.088700153 container start 1427a43566c4e8351160046301d5541b03039e69683fc7d2dc72fae9f9c36a27 (image=quay.io/ceph/ceph:v18, name=friendly_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 12:37:53 compute-0 podman[81985]: 2025-11-26 12:37:53.170279898 +0000 UTC m=+0.090280602 container attach 1427a43566c4e8351160046301d5541b03039e69683fc7d2dc72fae9f9c36a27 (image=quay.io/ceph/ceph:v18, name=friendly_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 12:37:53 compute-0 sudo[82028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f7d7fe93-41e5-51c4-b72d-63b38686102e/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/config/ceph.client.admin.keyring.new /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/config/ceph.client.admin.keyring
Nov 26 12:37:53 compute-0 sudo[82028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:53 compute-0 sudo[82028]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:53 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:37:53 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:53 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:37:53 compute-0 podman[81985]: 2025-11-26 12:37:53.097891231 +0000 UTC m=+0.017891966 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:53 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:53 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 12:37:53 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:53 compute-0 ceph-mgr[75236]: [progress INFO root] update: starting ev f99edd48-4aaf-4291-87e3-7332ae42a40e (Updating crash deployment (+1 -> 1))
Nov 26 12:37:53 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 26 12:37:53 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 26 12:37:53 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 26 12:37:53 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:37:53 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:37:53 compute-0 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Nov 26 12:37:53 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Nov 26 12:37:53 compute-0 sudo[82054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:53 compute-0 sudo[82054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:53 compute-0 sudo[82054]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:53 compute-0 sudo[82079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:37:53 compute-0 sudo[82079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:53 compute-0 sudo[82079]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:53 compute-0 sudo[82104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:53 compute-0 sudo[82104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:53 compute-0 sudo[82104]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:53 compute-0 sudo[82129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 12:37:53 compute-0 sudo[82129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:53 compute-0 podman[82206]: 2025-11-26 12:37:53.594316002 +0000 UTC m=+0.027806538 container create 3464f3f248a3996b7cf9cca10925d4d832c9bef3aee069672cf928b295da51ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_newton, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 12:37:53 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 12:37:53 compute-0 friendly_chandrasekhar[82024]: 
Nov 26 12:37:53 compute-0 friendly_chandrasekhar[82024]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 26 12:37:53 compute-0 systemd[1]: Started libpod-conmon-3464f3f248a3996b7cf9cca10925d4d832c9bef3aee069672cf928b295da51ce.scope.
Nov 26 12:37:53 compute-0 podman[81985]: 2025-11-26 12:37:53.624439806 +0000 UTC m=+0.544440509 container died 1427a43566c4e8351160046301d5541b03039e69683fc7d2dc72fae9f9c36a27 (image=quay.io/ceph/ceph:v18, name=friendly_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 12:37:53 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:53 compute-0 systemd[1]: libpod-1427a43566c4e8351160046301d5541b03039e69683fc7d2dc72fae9f9c36a27.scope: Deactivated successfully.
Nov 26 12:37:53 compute-0 podman[82206]: 2025-11-26 12:37:53.63799015 +0000 UTC m=+0.071480696 container init 3464f3f248a3996b7cf9cca10925d4d832c9bef3aee069672cf928b295da51ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_newton, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 12:37:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-7447f506c7d41e57054d439d5a4224e49b1cfe99569efff723b07cb9a5992995-merged.mount: Deactivated successfully.
Nov 26 12:37:53 compute-0 podman[82206]: 2025-11-26 12:37:53.646075515 +0000 UTC m=+0.079566052 container start 3464f3f248a3996b7cf9cca10925d4d832c9bef3aee069672cf928b295da51ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_newton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:37:53 compute-0 eager_newton[82221]: 167 167
Nov 26 12:37:53 compute-0 systemd[1]: libpod-3464f3f248a3996b7cf9cca10925d4d832c9bef3aee069672cf928b295da51ce.scope: Deactivated successfully.
Nov 26 12:37:53 compute-0 podman[82206]: 2025-11-26 12:37:53.647663798 +0000 UTC m=+0.081154325 container attach 3464f3f248a3996b7cf9cca10925d4d832c9bef3aee069672cf928b295da51ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 12:37:53 compute-0 podman[82206]: 2025-11-26 12:37:53.648726941 +0000 UTC m=+0.082217497 container died 3464f3f248a3996b7cf9cca10925d4d832c9bef3aee069672cf928b295da51ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_newton, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 12:37:53 compute-0 podman[81985]: 2025-11-26 12:37:53.65755151 +0000 UTC m=+0.577552214 container remove 1427a43566c4e8351160046301d5541b03039e69683fc7d2dc72fae9f9c36a27 (image=quay.io/ceph/ceph:v18, name=friendly_chandrasekhar, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:37:53 compute-0 systemd[1]: libpod-conmon-1427a43566c4e8351160046301d5541b03039e69683fc7d2dc72fae9f9c36a27.scope: Deactivated successfully.
Nov 26 12:37:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7b373b2296b9038e7e62c75974c8cf544f9eec40cdcc4852a9cf5a1eed59eb5-merged.mount: Deactivated successfully.
Nov 26 12:37:53 compute-0 podman[82206]: 2025-11-26 12:37:53.675528024 +0000 UTC m=+0.109018560 container remove 3464f3f248a3996b7cf9cca10925d4d832c9bef3aee069672cf928b295da51ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_newton, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 12:37:53 compute-0 podman[82206]: 2025-11-26 12:37:53.58260281 +0000 UTC m=+0.016093366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:37:53 compute-0 systemd[1]: libpod-conmon-3464f3f248a3996b7cf9cca10925d4d832c9bef3aee069672cf928b295da51ce.scope: Deactivated successfully.
Nov 26 12:37:53 compute-0 sudo[81907]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:53 compute-0 systemd[1]: Reloading.
Nov 26 12:37:53 compute-0 systemd-rc-local-generator[82267]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:37:53 compute-0 systemd-sysv-generator[82271]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:37:53 compute-0 ceph-mgr[75236]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 12:37:53 compute-0 sudo[82306]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ounfpdghkxvwwchtpxzrdlvkthiwdifw ; /usr/bin/python3'
Nov 26 12:37:53 compute-0 sudo[82306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:37:53 compute-0 systemd[1]: Reloading.
Nov 26 12:37:53 compute-0 systemd-rc-local-generator[82333]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:37:53 compute-0 systemd-sysv-generator[82337]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:37:54 compute-0 python3[82310]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:37:54 compute-0 podman[82348]: 2025-11-26 12:37:54.074737026 +0000 UTC m=+0.030544989 container create d19d41f6609babaab98a29e361bf2b3c4ae3ad440bc131d314d29a144f87e3ea (image=quay.io/ceph/ceph:v18, name=focused_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:37:54 compute-0 systemd[1]: Started libpod-conmon-d19d41f6609babaab98a29e361bf2b3c4ae3ad440bc131d314d29a144f87e3ea.scope.
Nov 26 12:37:54 compute-0 systemd[1]: Starting Ceph crash.compute-0 for f7d7fe93-41e5-51c4-b72d-63b38686102e...
Nov 26 12:37:54 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6decc99640cb07cf5b398eacadb07d95916c2d3a333c0fd9a07e6c9e328155d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6decc99640cb07cf5b398eacadb07d95916c2d3a333c0fd9a07e6c9e328155d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6decc99640cb07cf5b398eacadb07d95916c2d3a333c0fd9a07e6c9e328155d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:54 compute-0 podman[82348]: 2025-11-26 12:37:54.152866209 +0000 UTC m=+0.108674162 container init d19d41f6609babaab98a29e361bf2b3c4ae3ad440bc131d314d29a144f87e3ea (image=quay.io/ceph/ceph:v18, name=focused_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 26 12:37:54 compute-0 podman[82348]: 2025-11-26 12:37:54.158384809 +0000 UTC m=+0.114192762 container start d19d41f6609babaab98a29e361bf2b3c4ae3ad440bc131d314d29a144f87e3ea (image=quay.io/ceph/ceph:v18, name=focused_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:37:54 compute-0 podman[82348]: 2025-11-26 12:37:54.062044328 +0000 UTC m=+0.017852301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:54 compute-0 podman[82348]: 2025-11-26 12:37:54.159480093 +0000 UTC m=+0.115288045 container attach d19d41f6609babaab98a29e361bf2b3c4ae3ad440bc131d314d29a144f87e3ea (image=quay.io/ceph/ceph:v18, name=focused_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 12:37:54 compute-0 ceph-mon[74966]: Updating compute-0:/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/config/ceph.client.admin.keyring
Nov 26 12:37:54 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:54 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:54 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:54 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 26 12:37:54 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 26 12:37:54 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:37:54 compute-0 ceph-mon[74966]: Deploying daemon crash.compute-0 on compute-0
Nov 26 12:37:54 compute-0 podman[82406]: 2025-11-26 12:37:54.280904579 +0000 UTC m=+0.027233066 container create 3e7332a87e083e4328d645407351a983becb2661b8a10c2f82ef55cf9ce593fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-crash-compute-0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:37:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53dbad30a11e7ccdbe9e150371edd8de21f2106ca7348ef8439bea6efbfa23b2/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53dbad30a11e7ccdbe9e150371edd8de21f2106ca7348ef8439bea6efbfa23b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53dbad30a11e7ccdbe9e150371edd8de21f2106ca7348ef8439bea6efbfa23b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53dbad30a11e7ccdbe9e150371edd8de21f2106ca7348ef8439bea6efbfa23b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:54 compute-0 podman[82406]: 2025-11-26 12:37:54.328182718 +0000 UTC m=+0.074511216 container init 3e7332a87e083e4328d645407351a983becb2661b8a10c2f82ef55cf9ce593fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-crash-compute-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:37:54 compute-0 podman[82406]: 2025-11-26 12:37:54.332001424 +0000 UTC m=+0.078329911 container start 3e7332a87e083e4328d645407351a983becb2661b8a10c2f82ef55cf9ce593fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-crash-compute-0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 12:37:54 compute-0 bash[82406]: 3e7332a87e083e4328d645407351a983becb2661b8a10c2f82ef55cf9ce593fe
Nov 26 12:37:54 compute-0 podman[82406]: 2025-11-26 12:37:54.268502108 +0000 UTC m=+0.014830616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:37:54 compute-0 systemd[1]: Started Ceph crash.compute-0 for f7d7fe93-41e5-51c4-b72d-63b38686102e.
Nov 26 12:37:54 compute-0 sudo[82129]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:54 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:37:54 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:54 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:37:54 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:54 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 26 12:37:54 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:54 compute-0 ceph-mgr[75236]: [progress INFO root] complete: finished ev f99edd48-4aaf-4291-87e3-7332ae42a40e (Updating crash deployment (+1 -> 1))
Nov 26 12:37:54 compute-0 ceph-mgr[75236]: [progress INFO root] Completed event f99edd48-4aaf-4291-87e3-7332ae42a40e (Updating crash deployment (+1 -> 1)) in 1 seconds
Nov 26 12:37:54 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 26 12:37:54 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:54 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 93719bf2-5da3-4d7c-8f4f-016218e16b1c does not exist
Nov 26 12:37:54 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 26 12:37:54 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:54 compute-0 ceph-mgr[75236]: [progress INFO root] update: starting ev 6d0396db-e6a8-4d65-a474-a68fcf25b60b (Updating mgr deployment (+1 -> 2))
Nov 26 12:37:54 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.aefzvx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 26 12:37:54 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.aefzvx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 26 12:37:54 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.aefzvx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 26 12:37:54 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 26 12:37:54 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 12:37:54 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:37:54 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:37:54 compute-0 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.aefzvx on compute-0
Nov 26 12:37:54 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.aefzvx on compute-0
Nov 26 12:37:54 compute-0 sudo[82423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:54 compute-0 sudo[82423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:54 compute-0 sudo[82423]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:54 compute-0 sudo[82450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:37:54 compute-0 sudo[82450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:54 compute-0 sudo[82450]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:54 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-crash-compute-0[82418]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 26 12:37:54 compute-0 sudo[82492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:54 compute-0 sudo[82492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:54 compute-0 sudo[82492]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:54 compute-0 sudo[82519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 12:37:54 compute-0 sudo[82519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:54 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Nov 26 12:37:54 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/469833143' entity='client.admin' 
Nov 26 12:37:54 compute-0 systemd[1]: libpod-d19d41f6609babaab98a29e361bf2b3c4ae3ad440bc131d314d29a144f87e3ea.scope: Deactivated successfully.
Nov 26 12:37:54 compute-0 podman[82348]: 2025-11-26 12:37:54.645497532 +0000 UTC m=+0.601305495 container died d19d41f6609babaab98a29e361bf2b3c4ae3ad440bc131d314d29a144f87e3ea (image=quay.io/ceph/ceph:v18, name=focused_robinson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:37:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6decc99640cb07cf5b398eacadb07d95916c2d3a333c0fd9a07e6c9e328155d-merged.mount: Deactivated successfully.
Nov 26 12:37:54 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-crash-compute-0[82418]: 2025-11-26T12:37:54.659+0000 7f48658a2640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 26 12:37:54 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-crash-compute-0[82418]: 2025-11-26T12:37:54.659+0000 7f48658a2640 -1 AuthRegistry(0x7f4860066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 26 12:37:54 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-crash-compute-0[82418]: 2025-11-26T12:37:54.663+0000 7f48658a2640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 26 12:37:54 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-crash-compute-0[82418]: 2025-11-26T12:37:54.663+0000 7f48658a2640 -1 AuthRegistry(0x7f48658a1000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 26 12:37:54 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-crash-compute-0[82418]: 2025-11-26T12:37:54.664+0000 7f485effd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 26 12:37:54 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-crash-compute-0[82418]: 2025-11-26T12:37:54.664+0000 7f48658a2640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 26 12:37:54 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-crash-compute-0[82418]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 26 12:37:54 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-crash-compute-0[82418]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 26 12:37:54 compute-0 podman[82348]: 2025-11-26 12:37:54.675639219 +0000 UTC m=+0.631447172 container remove d19d41f6609babaab98a29e361bf2b3c4ae3ad440bc131d314d29a144f87e3ea (image=quay.io/ceph/ceph:v18, name=focused_robinson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 26 12:37:54 compute-0 systemd[1]: libpod-conmon-d19d41f6609babaab98a29e361bf2b3c4ae3ad440bc131d314d29a144f87e3ea.scope: Deactivated successfully.
Nov 26 12:37:54 compute-0 sudo[82306]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:54 compute-0 sudo[82625]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhoxhzjzazsrmptwttjxawhaaokvgjyf ; /usr/bin/python3'
Nov 26 12:37:54 compute-0 sudo[82625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:37:54 compute-0 podman[82621]: 2025-11-26 12:37:54.814867125 +0000 UTC m=+0.028819137 container create 06588c7a27403c10469b0c834e62736ba5da701ddd3ec362448ad531f360bdf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Nov 26 12:37:54 compute-0 systemd[1]: Started libpod-conmon-06588c7a27403c10469b0c834e62736ba5da701ddd3ec362448ad531f360bdf4.scope.
Nov 26 12:37:54 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:54 compute-0 podman[82621]: 2025-11-26 12:37:54.871344527 +0000 UTC m=+0.085296529 container init 06588c7a27403c10469b0c834e62736ba5da701ddd3ec362448ad531f360bdf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 12:37:54 compute-0 podman[82621]: 2025-11-26 12:37:54.876640287 +0000 UTC m=+0.090592290 container start 06588c7a27403c10469b0c834e62736ba5da701ddd3ec362448ad531f360bdf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 12:37:54 compute-0 podman[82621]: 2025-11-26 12:37:54.877806635 +0000 UTC m=+0.091758647 container attach 06588c7a27403c10469b0c834e62736ba5da701ddd3ec362448ad531f360bdf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:37:54 compute-0 jovial_stonebraker[82639]: 167 167
Nov 26 12:37:54 compute-0 systemd[1]: libpod-06588c7a27403c10469b0c834e62736ba5da701ddd3ec362448ad531f360bdf4.scope: Deactivated successfully.
Nov 26 12:37:54 compute-0 conmon[82639]: conmon 06588c7a27403c10469b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-06588c7a27403c10469b0c834e62736ba5da701ddd3ec362448ad531f360bdf4.scope/container/memory.events
Nov 26 12:37:54 compute-0 podman[82621]: 2025-11-26 12:37:54.881497059 +0000 UTC m=+0.095449091 container died 06588c7a27403c10469b0c834e62736ba5da701ddd3ec362448ad531f360bdf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 12:37:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-d04d48a88418ce45e2fbbe541a9d3d27eabb85063d8a7aa8c6bd5632d1806445-merged.mount: Deactivated successfully.
Nov 26 12:37:54 compute-0 podman[82621]: 2025-11-26 12:37:54.898451206 +0000 UTC m=+0.112403209 container remove 06588c7a27403c10469b0c834e62736ba5da701ddd3ec362448ad531f360bdf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 12:37:54 compute-0 podman[82621]: 2025-11-26 12:37:54.802885257 +0000 UTC m=+0.016837269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:37:54 compute-0 systemd[1]: libpod-conmon-06588c7a27403c10469b0c834e62736ba5da701ddd3ec362448ad531f360bdf4.scope: Deactivated successfully.
Nov 26 12:37:54 compute-0 python3[82634]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:37:54 compute-0 systemd[1]: Reloading.
Nov 26 12:37:54 compute-0 podman[82658]: 2025-11-26 12:37:54.974221504 +0000 UTC m=+0.041853678 container create da8ad798cc794016099398f8293236389accf04268c3dd52527478d7f203e435 (image=quay.io/ceph/ceph:v18, name=trusting_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 12:37:54 compute-0 systemd-rc-local-generator[82686]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:37:54 compute-0 systemd-sysv-generator[82689]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:37:55 compute-0 podman[82658]: 2025-11-26 12:37:54.955553067 +0000 UTC m=+0.023185260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:55 compute-0 systemd[1]: Started libpod-conmon-da8ad798cc794016099398f8293236389accf04268c3dd52527478d7f203e435.scope.
Nov 26 12:37:55 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85fd24533ee260b91034343713d7f199970411badf127ec0a0d6c9368882659e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85fd24533ee260b91034343713d7f199970411badf127ec0a0d6c9368882659e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85fd24533ee260b91034343713d7f199970411badf127ec0a0d6c9368882659e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:55 compute-0 podman[82658]: 2025-11-26 12:37:55.169557426 +0000 UTC m=+0.237189610 container init da8ad798cc794016099398f8293236389accf04268c3dd52527478d7f203e435 (image=quay.io/ceph/ceph:v18, name=trusting_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 12:37:55 compute-0 systemd[1]: Reloading.
Nov 26 12:37:55 compute-0 podman[82658]: 2025-11-26 12:37:55.175248701 +0000 UTC m=+0.242880875 container start da8ad798cc794016099398f8293236389accf04268c3dd52527478d7f203e435 (image=quay.io/ceph/ceph:v18, name=trusting_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:37:55 compute-0 podman[82658]: 2025-11-26 12:37:55.176461727 +0000 UTC m=+0.244093901 container attach da8ad798cc794016099398f8293236389accf04268c3dd52527478d7f203e435 (image=quay.io/ceph/ceph:v18, name=trusting_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 12:37:55 compute-0 ceph-mon[74966]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 12:37:55 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:55 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:55 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:55 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:55 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:55 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.aefzvx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 26 12:37:55 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.aefzvx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 26 12:37:55 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 12:37:55 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:37:55 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/469833143' entity='client.admin' 
Nov 26 12:37:55 compute-0 systemd-sysv-generator[82736]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:37:55 compute-0 systemd-rc-local-generator[82732]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:37:55 compute-0 systemd[1]: Starting Ceph mgr.compute-0.aefzvx for f7d7fe93-41e5-51c4-b72d-63b38686102e...
Nov 26 12:37:55 compute-0 podman[82807]: 2025-11-26 12:37:55.562883533 +0000 UTC m=+0.027665724 container create f14f588e6359d1a1f43de32f04f7f36967054db3683461e573c0cd77ce08f800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-aefzvx, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:37:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559938e686326156312ac3a8977cc64ca7160ce184cad6a016d72de0fb23e644/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559938e686326156312ac3a8977cc64ca7160ce184cad6a016d72de0fb23e644/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559938e686326156312ac3a8977cc64ca7160ce184cad6a016d72de0fb23e644/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559938e686326156312ac3a8977cc64ca7160ce184cad6a016d72de0fb23e644/merged/var/lib/ceph/mgr/ceph-compute-0.aefzvx supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:55 compute-0 podman[82807]: 2025-11-26 12:37:55.608877161 +0000 UTC m=+0.073659361 container init f14f588e6359d1a1f43de32f04f7f36967054db3683461e573c0cd77ce08f800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-aefzvx, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:37:55 compute-0 podman[82807]: 2025-11-26 12:37:55.613223943 +0000 UTC m=+0.078006143 container start f14f588e6359d1a1f43de32f04f7f36967054db3683461e573c0cd77ce08f800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-aefzvx, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 12:37:55 compute-0 bash[82807]: f14f588e6359d1a1f43de32f04f7f36967054db3683461e573c0cd77ce08f800
Nov 26 12:37:55 compute-0 podman[82807]: 2025-11-26 12:37:55.550845058 +0000 UTC m=+0.015627268 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:37:55 compute-0 systemd[1]: Started Ceph mgr.compute-0.aefzvx for f7d7fe93-41e5-51c4-b72d-63b38686102e.
Nov 26 12:37:55 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Nov 26 12:37:55 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2954090457' entity='client.admin' 
Nov 26 12:37:55 compute-0 sudo[82519]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:55 compute-0 ceph-mgr[82825]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 12:37:55 compute-0 ceph-mgr[82825]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 26 12:37:55 compute-0 systemd[1]: libpod-da8ad798cc794016099398f8293236389accf04268c3dd52527478d7f203e435.scope: Deactivated successfully.
Nov 26 12:37:55 compute-0 podman[82658]: 2025-11-26 12:37:55.647866191 +0000 UTC m=+0.715498375 container died da8ad798cc794016099398f8293236389accf04268c3dd52527478d7f203e435 (image=quay.io/ceph/ceph:v18, name=trusting_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 12:37:55 compute-0 ceph-mgr[82825]: pidfile_write: ignore empty --pid-file
Nov 26 12:37:55 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:37:55 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:55 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:37:55 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:55 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 26 12:37:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-85fd24533ee260b91034343713d7f199970411badf127ec0a0d6c9368882659e-merged.mount: Deactivated successfully.
Nov 26 12:37:55 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:55 compute-0 ceph-mgr[75236]: [progress INFO root] complete: finished ev 6d0396db-e6a8-4d65-a474-a68fcf25b60b (Updating mgr deployment (+1 -> 2))
Nov 26 12:37:55 compute-0 ceph-mgr[75236]: [progress INFO root] Completed event 6d0396db-e6a8-4d65-a474-a68fcf25b60b (Updating mgr deployment (+1 -> 2)) in 1 seconds
Nov 26 12:37:55 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 26 12:37:55 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:55 compute-0 podman[82658]: 2025-11-26 12:37:55.683539211 +0000 UTC m=+0.751171385 container remove da8ad798cc794016099398f8293236389accf04268c3dd52527478d7f203e435 (image=quay.io/ceph/ceph:v18, name=trusting_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 12:37:55 compute-0 systemd[1]: libpod-conmon-da8ad798cc794016099398f8293236389accf04268c3dd52527478d7f203e435.scope: Deactivated successfully.
Nov 26 12:37:55 compute-0 sudo[82625]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:55 compute-0 sudo[82859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:55 compute-0 sudo[82859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:55 compute-0 sudo[82859]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:55 compute-0 ceph-mgr[82825]: mgr[py] Loading python module 'alerts'
Nov 26 12:37:55 compute-0 sudo[82887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 12:37:55 compute-0 sudo[82887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:55 compute-0 sudo[82887]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:55 compute-0 sudo[82912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:55 compute-0 sudo[82912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:55 compute-0 sudo[82912]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:55 compute-0 ansible-async_wrapper.py[80400]: Done in kid B.
Nov 26 12:37:55 compute-0 sudo[82973]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qftzfktstgevcpgtnszykhmzoqgfswyb ; /usr/bin/python3'
Nov 26 12:37:55 compute-0 sudo[82973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:37:55 compute-0 ceph-mgr[75236]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 26 12:37:55 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 12:37:55 compute-0 ceph-mon[74966]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 26 12:37:55 compute-0 sudo[82947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:37:55 compute-0 sudo[82947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:55 compute-0 sudo[82947]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:55 compute-0 ceph-mgr[75236]: [progress INFO root] Writing back 2 completed events
Nov 26 12:37:55 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 26 12:37:55 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:55 compute-0 sudo[82988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:55 compute-0 sudo[82988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:55 compute-0 sudo[82988]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:55 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:37:55 compute-0 sudo[83013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 26 12:37:55 compute-0 sudo[83013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:55 compute-0 python3[82982]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:37:55 compute-0 podman[83038]: 2025-11-26 12:37:55.99409555 +0000 UTC m=+0.033057233 container create e1f50bce2d420bf0160e4d09c3e7fa76c6e42976ecd6f7370307f267d389101f (image=quay.io/ceph/ceph:v18, name=elated_nobel, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 12:37:56 compute-0 systemd[1]: Started libpod-conmon-e1f50bce2d420bf0160e4d09c3e7fa76c6e42976ecd6f7370307f267d389101f.scope.
Nov 26 12:37:56 compute-0 ceph-mgr[82825]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 26 12:37:56 compute-0 ceph-mgr[82825]: mgr[py] Loading python module 'balancer'
Nov 26 12:37:56 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-aefzvx[82821]: 2025-11-26T12:37:56.032+0000 7f1de36d7140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 26 12:37:56 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df57c8ece4108c6fdc4b35570e835b16ccca8b70c54877daef96b9dd9550b78/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df57c8ece4108c6fdc4b35570e835b16ccca8b70c54877daef96b9dd9550b78/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df57c8ece4108c6fdc4b35570e835b16ccca8b70c54877daef96b9dd9550b78/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:56 compute-0 podman[83038]: 2025-11-26 12:37:56.05042257 +0000 UTC m=+0.089384284 container init e1f50bce2d420bf0160e4d09c3e7fa76c6e42976ecd6f7370307f267d389101f (image=quay.io/ceph/ceph:v18, name=elated_nobel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 12:37:56 compute-0 podman[83038]: 2025-11-26 12:37:56.055619583 +0000 UTC m=+0.094581276 container start e1f50bce2d420bf0160e4d09c3e7fa76c6e42976ecd6f7370307f267d389101f (image=quay.io/ceph/ceph:v18, name=elated_nobel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:37:56 compute-0 podman[83038]: 2025-11-26 12:37:56.057207505 +0000 UTC m=+0.096169199 container attach e1f50bce2d420bf0160e4d09c3e7fa76c6e42976ecd6f7370307f267d389101f (image=quay.io/ceph/ceph:v18, name=elated_nobel, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:37:56 compute-0 podman[83038]: 2025-11-26 12:37:55.981548546 +0000 UTC m=+0.020510260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:56 compute-0 ceph-mon[74966]: Deploying daemon mgr.compute-0.aefzvx on compute-0
Nov 26 12:37:56 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2954090457' entity='client.admin' 
Nov 26 12:37:56 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:56 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:56 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:56 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:56 compute-0 ceph-mon[74966]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 26 12:37:56 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:56 compute-0 ceph-mgr[82825]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 26 12:37:56 compute-0 ceph-mgr[82825]: mgr[py] Loading python module 'cephadm'
Nov 26 12:37:56 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-aefzvx[82821]: 2025-11-26T12:37:56.253+0000 7f1de36d7140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 26 12:37:56 compute-0 podman[83111]: 2025-11-26 12:37:56.294896886 +0000 UTC m=+0.046360489 container exec ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:37:56 compute-0 podman[83111]: 2025-11-26 12:37:56.368748208 +0000 UTC m=+0.120211812 container exec_died ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 12:37:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Nov 26 12:37:56 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1251202468' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 26 12:37:56 compute-0 sudo[83013]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:37:56 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:37:56 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:37:56 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:37:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 12:37:56 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:37:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 12:37:56 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:56 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 4978d9f2-cf2b-46ba-b0b7-2bbd7d587193 does not exist
Nov 26 12:37:56 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 0b517728-5554-4b84-9b31-39c90cb6ded9 does not exist
Nov 26 12:37:56 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev e4ec4243-8d10-4ef0-bb3a-741e9da8caf0 does not exist
Nov 26 12:37:56 compute-0 sudo[83203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:56 compute-0 sudo[83203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:56 compute-0 sudo[83203]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:56 compute-0 sudo[83228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 12:37:56 compute-0 sudo[83228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:56 compute-0 sudo[83228]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Nov 26 12:37:56 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Nov 26 12:37:56 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Nov 26 12:37:56 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Nov 26 12:37:56 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:56 compute-0 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Nov 26 12:37:56 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Nov 26 12:37:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 26 12:37:56 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 26 12:37:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 26 12:37:56 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 26 12:37:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:37:56 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:37:56 compute-0 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 26 12:37:56 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 26 12:37:56 compute-0 sudo[83253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:56 compute-0 sudo[83253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:56 compute-0 sudo[83253]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:56 compute-0 sudo[83278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:37:56 compute-0 sudo[83278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:56 compute-0 sudo[83278]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:56 compute-0 sudo[83303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:56 compute-0 sudo[83303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:56 compute-0 sudo[83303]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:56 compute-0 sudo[83328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 12:37:56 compute-0 sudo[83328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:57 compute-0 podman[83367]: 2025-11-26 12:37:57.007508881 +0000 UTC m=+0.027932125 container create cce2928a15384f2d569a3e6ac06d1d301e96a44b213f599d74ba938499025655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jackson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:37:57 compute-0 systemd[1]: Started libpod-conmon-cce2928a15384f2d569a3e6ac06d1d301e96a44b213f599d74ba938499025655.scope.
Nov 26 12:37:57 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:57 compute-0 podman[83367]: 2025-11-26 12:37:57.051594995 +0000 UTC m=+0.072018239 container init cce2928a15384f2d569a3e6ac06d1d301e96a44b213f599d74ba938499025655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:37:57 compute-0 podman[83367]: 2025-11-26 12:37:57.056772983 +0000 UTC m=+0.077196226 container start cce2928a15384f2d569a3e6ac06d1d301e96a44b213f599d74ba938499025655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:37:57 compute-0 blissful_jackson[83381]: 167 167
Nov 26 12:37:57 compute-0 podman[83367]: 2025-11-26 12:37:57.060175092 +0000 UTC m=+0.080598337 container attach cce2928a15384f2d569a3e6ac06d1d301e96a44b213f599d74ba938499025655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 12:37:57 compute-0 conmon[83381]: conmon cce2928a15384f2d569a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cce2928a15384f2d569a3e6ac06d1d301e96a44b213f599d74ba938499025655.scope/container/memory.events
Nov 26 12:37:57 compute-0 systemd[1]: libpod-cce2928a15384f2d569a3e6ac06d1d301e96a44b213f599d74ba938499025655.scope: Deactivated successfully.
Nov 26 12:37:57 compute-0 podman[83367]: 2025-11-26 12:37:57.06136781 +0000 UTC m=+0.081791054 container died cce2928a15384f2d569a3e6ac06d1d301e96a44b213f599d74ba938499025655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jackson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 12:37:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-f578a478bd6c1496ddb1e360430a80454efcee05ce98fbef0b679c8083c9dde9-merged.mount: Deactivated successfully.
Nov 26 12:37:57 compute-0 podman[83367]: 2025-11-26 12:37:57.092411888 +0000 UTC m=+0.112835132 container remove cce2928a15384f2d569a3e6ac06d1d301e96a44b213f599d74ba938499025655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 12:37:57 compute-0 podman[83367]: 2025-11-26 12:37:56.995845052 +0000 UTC m=+0.016268316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:37:57 compute-0 systemd[1]: libpod-conmon-cce2928a15384f2d569a3e6ac06d1d301e96a44b213f599d74ba938499025655.scope: Deactivated successfully.
Nov 26 12:37:57 compute-0 sudo[83328]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:57 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:37:57 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:57 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:37:57 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:57 compute-0 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.whkbdn (unknown last config time)...
Nov 26 12:37:57 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.whkbdn (unknown last config time)...
Nov 26 12:37:57 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.whkbdn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 26 12:37:57 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.whkbdn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 26 12:37:57 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 26 12:37:57 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 12:37:57 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:37:57 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:37:57 compute-0 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.whkbdn on compute-0
Nov 26 12:37:57 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.whkbdn on compute-0
Nov 26 12:37:57 compute-0 sudo[83398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:57 compute-0 sudo[83398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:57 compute-0 sudo[83398]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:57 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Nov 26 12:37:57 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 12:37:57 compute-0 ceph-mon[74966]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 12:37:57 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1251202468' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 26 12:37:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:37:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:37:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 26 12:37:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 26 12:37:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:37:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.whkbdn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 26 12:37:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 12:37:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:37:57 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1251202468' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 26 12:37:57 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Nov 26 12:37:57 compute-0 elated_nobel[83052]: set require_min_compat_client to mimic
Nov 26 12:37:57 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Nov 26 12:37:57 compute-0 systemd[1]: libpod-e1f50bce2d420bf0160e4d09c3e7fa76c6e42976ecd6f7370307f267d389101f.scope: Deactivated successfully.
Nov 26 12:37:57 compute-0 podman[83038]: 2025-11-26 12:37:57.228790314 +0000 UTC m=+1.267752017 container died e1f50bce2d420bf0160e4d09c3e7fa76c6e42976ecd6f7370307f267d389101f (image=quay.io/ceph/ceph:v18, name=elated_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 12:37:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-9df57c8ece4108c6fdc4b35570e835b16ccca8b70c54877daef96b9dd9550b78-merged.mount: Deactivated successfully.
Nov 26 12:37:57 compute-0 podman[83038]: 2025-11-26 12:37:57.255744406 +0000 UTC m=+1.294706099 container remove e1f50bce2d420bf0160e4d09c3e7fa76c6e42976ecd6f7370307f267d389101f (image=quay.io/ceph/ceph:v18, name=elated_nobel, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 12:37:57 compute-0 sudo[82973]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:57 compute-0 systemd[1]: libpod-conmon-e1f50bce2d420bf0160e4d09c3e7fa76c6e42976ecd6f7370307f267d389101f.scope: Deactivated successfully.
Nov 26 12:37:57 compute-0 sudo[83424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:37:57 compute-0 sudo[83424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:57 compute-0 sudo[83424]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:57 compute-0 sudo[83458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:57 compute-0 sudo[83458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:57 compute-0 sudo[83458]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:57 compute-0 sudo[83483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 12:37:57 compute-0 sudo[83483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:57 compute-0 podman[83533]: 2025-11-26 12:37:57.562350219 +0000 UTC m=+0.030146928 container create bc33641fc4e2c2f195c92d81f84e162192f70476cbcba780750a8afc0da1f9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 12:37:57 compute-0 systemd[1]: Started libpod-conmon-bc33641fc4e2c2f195c92d81f84e162192f70476cbcba780750a8afc0da1f9a9.scope.
Nov 26 12:37:57 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:57 compute-0 podman[83533]: 2025-11-26 12:37:57.608066315 +0000 UTC m=+0.075863034 container init bc33641fc4e2c2f195c92d81f84e162192f70476cbcba780750a8afc0da1f9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 12:37:57 compute-0 podman[83533]: 2025-11-26 12:37:57.614243495 +0000 UTC m=+0.082040203 container start bc33641fc4e2c2f195c92d81f84e162192f70476cbcba780750a8afc0da1f9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banzai, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 12:37:57 compute-0 podman[83533]: 2025-11-26 12:37:57.615865561 +0000 UTC m=+0.083662270 container attach bc33641fc4e2c2f195c92d81f84e162192f70476cbcba780750a8afc0da1f9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 12:37:57 compute-0 serene_banzai[83550]: 167 167
Nov 26 12:37:57 compute-0 podman[83533]: 2025-11-26 12:37:57.61717612 +0000 UTC m=+0.084972830 container died bc33641fc4e2c2f195c92d81f84e162192f70476cbcba780750a8afc0da1f9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:37:57 compute-0 systemd[1]: libpod-bc33641fc4e2c2f195c92d81f84e162192f70476cbcba780750a8afc0da1f9a9.scope: Deactivated successfully.
Nov 26 12:37:57 compute-0 sudo[83573]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geckrqyjsiagcxryiskdsgdjjvmbfoaz ; /usr/bin/python3'
Nov 26 12:37:57 compute-0 sudo[83573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:37:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f7b3c956fa6ba538c24c844caa8d8ca3eda3147aac2ed3dd6174b1d8585d929-merged.mount: Deactivated successfully.
Nov 26 12:37:57 compute-0 podman[83533]: 2025-11-26 12:37:57.637600067 +0000 UTC m=+0.105396776 container remove bc33641fc4e2c2f195c92d81f84e162192f70476cbcba780750a8afc0da1f9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 12:37:57 compute-0 podman[83533]: 2025-11-26 12:37:57.548646075 +0000 UTC m=+0.016442795 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:37:57 compute-0 systemd[1]: libpod-conmon-bc33641fc4e2c2f195c92d81f84e162192f70476cbcba780750a8afc0da1f9a9.scope: Deactivated successfully.
Nov 26 12:37:57 compute-0 sudo[83483]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:57 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:37:57 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:57 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:37:57 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:57 compute-0 sudo[83590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:57 compute-0 sudo[83590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:57 compute-0 sudo[83590]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:57 compute-0 python3[83582]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:37:57 compute-0 sudo[83615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:37:57 compute-0 sudo[83615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:57 compute-0 sudo[83615]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:57 compute-0 podman[83638]: 2025-11-26 12:37:57.797622946 +0000 UTC m=+0.030723353 container create 5ab36972bf96b7f9bdb3f2da8854efdd108611c1873282a37ddb53ee5ef5101f (image=quay.io/ceph/ceph:v18, name=jovial_cartwright, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 26 12:37:57 compute-0 systemd[1]: Started libpod-conmon-5ab36972bf96b7f9bdb3f2da8854efdd108611c1873282a37ddb53ee5ef5101f.scope.
Nov 26 12:37:57 compute-0 sudo[83645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:57 compute-0 sudo[83645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:57 compute-0 sudo[83645]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:57 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d503511bfccc15a1086f6f507fb0f8fa5cb81e727cae50dd7b3bf6b3fc51c4f0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d503511bfccc15a1086f6f507fb0f8fa5cb81e727cae50dd7b3bf6b3fc51c4f0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d503511bfccc15a1086f6f507fb0f8fa5cb81e727cae50dd7b3bf6b3fc51c4f0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:57 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 12:37:57 compute-0 podman[83638]: 2025-11-26 12:37:57.846419427 +0000 UTC m=+0.079519834 container init 5ab36972bf96b7f9bdb3f2da8854efdd108611c1873282a37ddb53ee5ef5101f (image=quay.io/ceph/ceph:v18, name=jovial_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:37:57 compute-0 podman[83638]: 2025-11-26 12:37:57.85252928 +0000 UTC m=+0.085629677 container start 5ab36972bf96b7f9bdb3f2da8854efdd108611c1873282a37ddb53ee5ef5101f (image=quay.io/ceph/ceph:v18, name=jovial_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 12:37:57 compute-0 podman[83638]: 2025-11-26 12:37:57.853836614 +0000 UTC m=+0.086937011 container attach 5ab36972bf96b7f9bdb3f2da8854efdd108611c1873282a37ddb53ee5ef5101f (image=quay.io/ceph/ceph:v18, name=jovial_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:37:57 compute-0 podman[83638]: 2025-11-26 12:37:57.78662922 +0000 UTC m=+0.019729637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:57 compute-0 sudo[83680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 26 12:37:57 compute-0 sudo[83680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:57 compute-0 ceph-mgr[82825]: mgr[py] Loading python module 'crash'
Nov 26 12:37:58 compute-0 ceph-mgr[82825]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 26 12:37:58 compute-0 ceph-mgr[82825]: mgr[py] Loading python module 'dashboard'
Nov 26 12:37:58 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-aefzvx[82821]: 2025-11-26T12:37:58.164+0000 7f1de36d7140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 26 12:37:58 compute-0 ceph-mon[74966]: Reconfiguring mon.compute-0 (unknown last config time)...
Nov 26 12:37:58 compute-0 ceph-mon[74966]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 26 12:37:58 compute-0 ceph-mon[74966]: Reconfiguring mgr.compute-0.whkbdn (unknown last config time)...
Nov 26 12:37:58 compute-0 ceph-mon[74966]: Reconfiguring daemon mgr.compute-0.whkbdn on compute-0
Nov 26 12:37:58 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1251202468' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 26 12:37:58 compute-0 ceph-mon[74966]: osdmap e3: 0 total, 0 up, 0 in
Nov 26 12:37:58 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:58 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:58 compute-0 podman[83778]: 2025-11-26 12:37:58.238087649 +0000 UTC m=+0.043343576 container exec ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:37:58 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:37:58 compute-0 podman[83778]: 2025-11-26 12:37:58.321035262 +0000 UTC m=+0.126291168 container exec_died ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:37:58 compute-0 sudo[83796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:58 compute-0 sudo[83796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:58 compute-0 sudo[83796]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:58 compute-0 sudo[83844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:37:58 compute-0 sudo[83844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:58 compute-0 sudo[83844]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:58 compute-0 sudo[83885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:58 compute-0 sudo[83885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:58 compute-0 sudo[83885]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:58 compute-0 sudo[83680]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:58 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:37:58 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:58 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:37:58 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:58 compute-0 sudo[83925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 26 12:37:58 compute-0 sudo[83925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:58 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:37:58 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:37:58 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 12:37:58 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:37:58 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 12:37:58 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:58 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev dbb7ecfc-dd9a-4860-bb18-1201c5fedcc7 does not exist
Nov 26 12:37:58 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 9796111b-7f3b-445b-a875-4db25f177569 does not exist
Nov 26 12:37:58 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev e471cedc-a121-4f4c-ab0f-1d8fdb4853f2 does not exist
Nov 26 12:37:58 compute-0 sudo[83953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:58 compute-0 sudo[83953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:58 compute-0 sudo[83953]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:58 compute-0 sudo[83978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 12:37:58 compute-0 sudo[83978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:58 compute-0 sudo[83978]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:58 compute-0 sudo[83925]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:58 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 26 12:37:58 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:58 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 26 12:37:58 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:58 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 26 12:37:58 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:58 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 26 12:37:58 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:58 compute-0 ceph-mgr[75236]: [cephadm INFO root] Added host compute-0
Nov 26 12:37:58 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 26 12:37:58 compute-0 ceph-mgr[75236]: [cephadm INFO root] Saving service mon spec with placement compute-0
Nov 26 12:37:58 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Nov 26 12:37:58 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:37:58 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:37:58 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 26 12:37:58 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 12:37:58 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:37:58 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:58 compute-0 ceph-mgr[75236]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Nov 26 12:37:58 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Nov 26 12:37:58 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 12:37:58 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 26 12:37:58 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:58 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 57414f9b-0314-4361-be85-7eb871bf57d4 does not exist
Nov 26 12:37:58 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:58 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 26 12:37:58 compute-0 ceph-mgr[75236]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Nov 26 12:37:58 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Nov 26 12:37:58 compute-0 ceph-mgr[75236]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Nov 26 12:37:58 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Nov 26 12:37:58 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:58 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Nov 26 12:37:58 compute-0 ceph-mgr[75236]: [progress INFO root] update: starting ev 13ef4d25-9675-4f68-842f-f29e4ba7da32 (Updating mgr deployment (-1 -> 1))
Nov 26 12:37:58 compute-0 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.aefzvx from compute-0 -- ports [8765]
Nov 26 12:37:58 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.aefzvx from compute-0 -- ports [8765]
Nov 26 12:37:58 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:58 compute-0 jovial_cartwright[83676]: Added host 'compute-0' with addr '192.168.122.100'
Nov 26 12:37:58 compute-0 jovial_cartwright[83676]: Scheduled mon update...
Nov 26 12:37:58 compute-0 jovial_cartwright[83676]: Scheduled mgr update...
Nov 26 12:37:58 compute-0 jovial_cartwright[83676]: Scheduled osd.default_drive_group update...
Nov 26 12:37:58 compute-0 systemd[1]: libpod-5ab36972bf96b7f9bdb3f2da8854efdd108611c1873282a37ddb53ee5ef5101f.scope: Deactivated successfully.
Nov 26 12:37:58 compute-0 podman[84033]: 2025-11-26 12:37:58.828803357 +0000 UTC m=+0.016691463 container died 5ab36972bf96b7f9bdb3f2da8854efdd108611c1873282a37ddb53ee5ef5101f (image=quay.io/ceph/ceph:v18, name=jovial_cartwright, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 12:37:58 compute-0 sudo[84021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:58 compute-0 sudo[84021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:58 compute-0 sudo[84021]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-d503511bfccc15a1086f6f507fb0f8fa5cb81e727cae50dd7b3bf6b3fc51c4f0-merged.mount: Deactivated successfully.
Nov 26 12:37:58 compute-0 podman[84033]: 2025-11-26 12:37:58.855188166 +0000 UTC m=+0.043076272 container remove 5ab36972bf96b7f9bdb3f2da8854efdd108611c1873282a37ddb53ee5ef5101f (image=quay.io/ceph/ceph:v18, name=jovial_cartwright, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:37:58 compute-0 systemd[1]: libpod-conmon-5ab36972bf96b7f9bdb3f2da8854efdd108611c1873282a37ddb53ee5ef5101f.scope: Deactivated successfully.
Nov 26 12:37:58 compute-0 sudo[83573]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:58 compute-0 sudo[84057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:37:58 compute-0 sudo[84057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:58 compute-0 sudo[84057]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:58 compute-0 sudo[84082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:58 compute-0 sudo[84082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:58 compute-0 sudo[84082]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:58 compute-0 sudo[84107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 rm-daemon --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e --name mgr.compute-0.aefzvx --force --tcp-ports 8765
Nov 26 12:37:58 compute-0 sudo[84107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:59 compute-0 sudo[84155]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rosxhbnrhtybwbnjigkbenjzckahjgum ; /usr/bin/python3'
Nov 26 12:37:59 compute-0 sudo[84155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:37:59 compute-0 python3[84157]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:37:59 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.aefzvx for f7d7fe93-41e5-51c4-b72d-63b38686102e...
Nov 26 12:37:59 compute-0 ceph-mon[74966]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 12:37:59 compute-0 podman[84190]: 2025-11-26 12:37:59.213417016 +0000 UTC m=+0.028961244 container create 91e14f4d9b8f90d525cd750d784a584b294f819f498ea47b507ce773006dea18 (image=quay.io/ceph/ceph:v18, name=compassionate_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:37:59 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:59 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:59 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:37:59 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:37:59 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:59 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:59 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:59 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:59 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:59 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:37:59 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:37:59 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:59 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:59 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:59 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:59 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:59 compute-0 systemd[1]: Started libpod-conmon-91e14f4d9b8f90d525cd750d784a584b294f819f498ea47b507ce773006dea18.scope.
Nov 26 12:37:59 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2346db4741e3c3384eb9ffbaa68dadbcecff621a873cc78f5fdfcfd2f0d69362/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2346db4741e3c3384eb9ffbaa68dadbcecff621a873cc78f5fdfcfd2f0d69362/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2346db4741e3c3384eb9ffbaa68dadbcecff621a873cc78f5fdfcfd2f0d69362/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 12:37:59 compute-0 podman[84190]: 2025-11-26 12:37:59.266170482 +0000 UTC m=+0.081714720 container init 91e14f4d9b8f90d525cd750d784a584b294f819f498ea47b507ce773006dea18 (image=quay.io/ceph/ceph:v18, name=compassionate_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 12:37:59 compute-0 podman[84190]: 2025-11-26 12:37:59.272267141 +0000 UTC m=+0.087811369 container start 91e14f4d9b8f90d525cd750d784a584b294f819f498ea47b507ce773006dea18 (image=quay.io/ceph/ceph:v18, name=compassionate_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:37:59 compute-0 podman[84190]: 2025-11-26 12:37:59.273594672 +0000 UTC m=+0.089138900 container attach 91e14f4d9b8f90d525cd750d784a584b294f819f498ea47b507ce773006dea18 (image=quay.io/ceph/ceph:v18, name=compassionate_goldwasser, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:37:59 compute-0 podman[84190]: 2025-11-26 12:37:59.201583797 +0000 UTC m=+0.017128046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:37:59 compute-0 podman[84228]: 2025-11-26 12:37:59.373104214 +0000 UTC m=+0.044909147 container died f14f588e6359d1a1f43de32f04f7f36967054db3683461e573c0cd77ce08f800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-aefzvx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:37:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-559938e686326156312ac3a8977cc64ca7160ce184cad6a016d72de0fb23e644-merged.mount: Deactivated successfully.
Nov 26 12:37:59 compute-0 podman[84228]: 2025-11-26 12:37:59.394645835 +0000 UTC m=+0.066450768 container remove f14f588e6359d1a1f43de32f04f7f36967054db3683461e573c0cd77ce08f800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-aefzvx, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 12:37:59 compute-0 bash[84228]: ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-aefzvx
Nov 26 12:37:59 compute-0 systemd[1]: ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e@mgr.compute-0.aefzvx.service: Main process exited, code=exited, status=143/n/a
Nov 26 12:37:59 compute-0 systemd[1]: ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e@mgr.compute-0.aefzvx.service: Failed with result 'exit-code'.
Nov 26 12:37:59 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.aefzvx for f7d7fe93-41e5-51c4-b72d-63b38686102e.
Nov 26 12:37:59 compute-0 systemd[1]: ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e@mgr.compute-0.aefzvx.service: Consumed 4.176s CPU time.
Nov 26 12:37:59 compute-0 systemd[1]: Reloading.
Nov 26 12:37:59 compute-0 systemd-rc-local-generator[84312]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:37:59 compute-0 systemd-sysv-generator[84316]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:37:59 compute-0 sudo[84107]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:59 compute-0 ceph-mgr[75236]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.aefzvx
Nov 26 12:37:59 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.aefzvx
Nov 26 12:37:59 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.aefzvx"} v 0) v1
Nov 26 12:37:59 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.aefzvx"}]: dispatch
Nov 26 12:37:59 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.aefzvx"}]': finished
Nov 26 12:37:59 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 26 12:37:59 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:59 compute-0 ceph-mgr[75236]: [progress INFO root] complete: finished ev 13ef4d25-9675-4f68-842f-f29e4ba7da32 (Updating mgr deployment (-1 -> 1))
Nov 26 12:37:59 compute-0 ceph-mgr[75236]: [progress INFO root] Completed event 13ef4d25-9675-4f68-842f-f29e4ba7da32 (Updating mgr deployment (-1 -> 1)) in 1 seconds
Nov 26 12:37:59 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 26 12:37:59 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:37:59 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 6e52240c-0cee-499c-8ce3-442d359e171d does not exist
Nov 26 12:37:59 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 26 12:37:59 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2793246863' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 26 12:37:59 compute-0 compassionate_goldwasser[84212]: 
Nov 26 12:37:59 compute-0 compassionate_goldwasser[84212]: {"fsid":"f7d7fe93-41e5-51c4-b72d-63b38686102e","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":63,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-26T12:36:53.922147+0000","services":{}},"progress_events":{}}
Nov 26 12:37:59 compute-0 sudo[84329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:59 compute-0 sudo[84329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:59 compute-0 sudo[84329]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:59 compute-0 systemd[1]: libpod-91e14f4d9b8f90d525cd750d784a584b294f819f498ea47b507ce773006dea18.scope: Deactivated successfully.
Nov 26 12:37:59 compute-0 conmon[84212]: conmon 91e14f4d9b8f90d525cd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-91e14f4d9b8f90d525cd750d784a584b294f819f498ea47b507ce773006dea18.scope/container/memory.events
Nov 26 12:37:59 compute-0 podman[84190]: 2025-11-26 12:37:59.812268636 +0000 UTC m=+0.627812863 container died 91e14f4d9b8f90d525cd750d784a584b294f819f498ea47b507ce773006dea18 (image=quay.io/ceph/ceph:v18, name=compassionate_goldwasser, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 12:37:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-2346db4741e3c3384eb9ffbaa68dadbcecff621a873cc78f5fdfcfd2f0d69362-merged.mount: Deactivated successfully.
Nov 26 12:37:59 compute-0 podman[84190]: 2025-11-26 12:37:59.837548601 +0000 UTC m=+0.653092830 container remove 91e14f4d9b8f90d525cd750d784a584b294f819f498ea47b507ce773006dea18 (image=quay.io/ceph/ceph:v18, name=compassionate_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Nov 26 12:37:59 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 12:37:59 compute-0 systemd[1]: libpod-conmon-91e14f4d9b8f90d525cd750d784a584b294f819f498ea47b507ce773006dea18.scope: Deactivated successfully.
Nov 26 12:37:59 compute-0 sudo[84155]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:59 compute-0 sudo[84356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 12:37:59 compute-0 sudo[84356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:59 compute-0 sudo[84356]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:59 compute-0 sudo[84390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:59 compute-0 sudo[84390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:59 compute-0 sudo[84390]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:59 compute-0 sudo[84415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:37:59 compute-0 sudo[84415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:59 compute-0 sudo[84415]: pam_unix(sudo:session): session closed for user root
Nov 26 12:37:59 compute-0 sudo[84440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:37:59 compute-0 sudo[84440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:37:59 compute-0 sudo[84440]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:00 compute-0 sudo[84465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 26 12:38:00 compute-0 sudo[84465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:00 compute-0 ceph-mon[74966]: from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:38:00 compute-0 ceph-mon[74966]: Added host compute-0
Nov 26 12:38:00 compute-0 ceph-mon[74966]: Saving service mon spec with placement compute-0
Nov 26 12:38:00 compute-0 ceph-mon[74966]: Saving service mgr spec with placement compute-0
Nov 26 12:38:00 compute-0 ceph-mon[74966]: Marking host: compute-0 for OSDSpec preview refresh.
Nov 26 12:38:00 compute-0 ceph-mon[74966]: Saving service osd.default_drive_group spec with placement compute-0
Nov 26 12:38:00 compute-0 ceph-mon[74966]: Removing daemon mgr.compute-0.aefzvx from compute-0 -- ports [8765]
Nov 26 12:38:00 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.aefzvx"}]: dispatch
Nov 26 12:38:00 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.aefzvx"}]': finished
Nov 26 12:38:00 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:00 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:00 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2793246863' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 26 12:38:00 compute-0 podman[84547]: 2025-11-26 12:38:00.348383236 +0000 UTC m=+0.037098899 container exec ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 12:38:00 compute-0 podman[84547]: 2025-11-26 12:38:00.427951481 +0000 UTC m=+0.116667142 container exec_died ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:38:00 compute-0 sudo[84465]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:38:00 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:38:00 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:38:00 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:38:00 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:38:00 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:38:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 12:38:00 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:38:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 12:38:00 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:00 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 3b189e6a-9256-4375-8ac6-31a8b026514d does not exist
Nov 26 12:38:00 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev f5ef6df5-8cf9-46e6-9182-84070a9e5bc1 does not exist
Nov 26 12:38:00 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 4c9deb10-729a-4bcc-a1fe-44a626703b19 does not exist
Nov 26 12:38:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 12:38:00 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:38:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 12:38:00 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:38:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:38:00 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:38:00 compute-0 sudo[84602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:00 compute-0 sudo[84602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:00 compute-0 sudo[84602]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:00 compute-0 sudo[84627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:38:00 compute-0 sudo[84627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:00 compute-0 sudo[84627]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:00 compute-0 sudo[84652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:00 compute-0 sudo[84652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:00 compute-0 sudo[84652]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:00 compute-0 sudo[84677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 12:38:00 compute-0 sudo[84677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:00 compute-0 ceph-mgr[75236]: [progress INFO root] Writing back 3 completed events
Nov 26 12:38:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 26 12:38:00 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:38:00 compute-0 podman[84734]: 2025-11-26 12:38:00.956010962 +0000 UTC m=+0.026877077 container create d3c67952ac350d7470c226d284f2c8c653f84414f43e63e8e30211213226426b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_herschel, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 12:38:00 compute-0 systemd[1]: Started libpod-conmon-d3c67952ac350d7470c226d284f2c8c653f84414f43e63e8e30211213226426b.scope.
Nov 26 12:38:01 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:01 compute-0 podman[84734]: 2025-11-26 12:38:01.010497464 +0000 UTC m=+0.081363579 container init d3c67952ac350d7470c226d284f2c8c653f84414f43e63e8e30211213226426b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_herschel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 12:38:01 compute-0 podman[84734]: 2025-11-26 12:38:01.015633563 +0000 UTC m=+0.086499677 container start d3c67952ac350d7470c226d284f2c8c653f84414f43e63e8e30211213226426b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_herschel, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 12:38:01 compute-0 podman[84734]: 2025-11-26 12:38:01.01679448 +0000 UTC m=+0.087660594 container attach d3c67952ac350d7470c226d284f2c8c653f84414f43e63e8e30211213226426b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 26 12:38:01 compute-0 cool_herschel[84747]: 167 167
Nov 26 12:38:01 compute-0 systemd[1]: libpod-d3c67952ac350d7470c226d284f2c8c653f84414f43e63e8e30211213226426b.scope: Deactivated successfully.
Nov 26 12:38:01 compute-0 podman[84734]: 2025-11-26 12:38:01.019049289 +0000 UTC m=+0.089915423 container died d3c67952ac350d7470c226d284f2c8c653f84414f43e63e8e30211213226426b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:38:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c1542da40b741c5f8281c9e10e28b40e228a58ae1498df2c948c904519f07f2-merged.mount: Deactivated successfully.
Nov 26 12:38:01 compute-0 podman[84734]: 2025-11-26 12:38:01.035024873 +0000 UTC m=+0.105890987 container remove d3c67952ac350d7470c226d284f2c8c653f84414f43e63e8e30211213226426b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:38:01 compute-0 podman[84734]: 2025-11-26 12:38:00.944641718 +0000 UTC m=+0.015507852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:01 compute-0 systemd[1]: libpod-conmon-d3c67952ac350d7470c226d284f2c8c653f84414f43e63e8e30211213226426b.scope: Deactivated successfully.
Nov 26 12:38:01 compute-0 podman[84769]: 2025-11-26 12:38:01.143690087 +0000 UTC m=+0.025103876 container create 838032c3aa40db808c90baa7668cc5731707a8e65883d0d92f7ebbec42e53038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:38:01 compute-0 systemd[1]: Started libpod-conmon-838032c3aa40db808c90baa7668cc5731707a8e65883d0d92f7ebbec42e53038.scope.
Nov 26 12:38:01 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f6cc841334fcd28e003b78085738ac6013700560359f409e705c8fff0ab5d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f6cc841334fcd28e003b78085738ac6013700560359f409e705c8fff0ab5d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f6cc841334fcd28e003b78085738ac6013700560359f409e705c8fff0ab5d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f6cc841334fcd28e003b78085738ac6013700560359f409e705c8fff0ab5d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f6cc841334fcd28e003b78085738ac6013700560359f409e705c8fff0ab5d1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:01 compute-0 podman[84769]: 2025-11-26 12:38:01.205656573 +0000 UTC m=+0.087070372 container init 838032c3aa40db808c90baa7668cc5731707a8e65883d0d92f7ebbec42e53038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 12:38:01 compute-0 podman[84769]: 2025-11-26 12:38:01.210014446 +0000 UTC m=+0.091428234 container start 838032c3aa40db808c90baa7668cc5731707a8e65883d0d92f7ebbec42e53038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_proskuriakova, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 12:38:01 compute-0 podman[84769]: 2025-11-26 12:38:01.210984904 +0000 UTC m=+0.092398693 container attach 838032c3aa40db808c90baa7668cc5731707a8e65883d0d92f7ebbec42e53038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_proskuriakova, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:38:01 compute-0 ceph-mon[74966]: Removing key for mgr.compute-0.aefzvx
Nov 26 12:38:01 compute-0 ceph-mon[74966]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 12:38:01 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:01 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:01 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:01 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:01 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:38:01 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:38:01 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:01 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:38:01 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:38:01 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:38:01 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:01 compute-0 podman[84769]: 2025-11-26 12:38:01.133496959 +0000 UTC m=+0.014910758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:01 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 12:38:02 compute-0 flamboyant_proskuriakova[84782]: --> passed data devices: 0 physical, 3 LVM
Nov 26 12:38:02 compute-0 flamboyant_proskuriakova[84782]: --> relative data size: 1.0
Nov 26 12:38:02 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 26 12:38:02 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new ef2b480d-9484-4a2f-b46e-f0af80cc4943
Nov 26 12:38:02 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943"} v 0) v1
Nov 26 12:38:02 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2458524021' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943"}]: dispatch
Nov 26 12:38:02 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Nov 26 12:38:02 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 12:38:02 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2458524021' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943"}]': finished
Nov 26 12:38:02 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Nov 26 12:38:02 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Nov 26 12:38:02 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 12:38:02 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 12:38:02 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 12:38:02 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 26 12:38:02 compute-0 lvm[84843]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 26 12:38:02 compute-0 lvm[84843]: VG ceph_vg0 finished
Nov 26 12:38:02 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Nov 26 12:38:02 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 26 12:38:02 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 26 12:38:02 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 26 12:38:02 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Nov 26 12:38:02 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 26 12:38:02 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4214126572' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 26 12:38:02 compute-0 flamboyant_proskuriakova[84782]:  stderr: got monmap epoch 1
Nov 26 12:38:02 compute-0 flamboyant_proskuriakova[84782]: --> Creating keyring file for osd.0
Nov 26 12:38:02 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Nov 26 12:38:02 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Nov 26 12:38:02 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid ef2b480d-9484-4a2f-b46e-f0af80cc4943 --setuser ceph --setgroup ceph
Nov 26 12:38:03 compute-0 ceph-mon[74966]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 12:38:03 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2458524021' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943"}]: dispatch
Nov 26 12:38:03 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2458524021' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943"}]': finished
Nov 26 12:38:03 compute-0 ceph-mon[74966]: osdmap e4: 1 total, 0 up, 1 in
Nov 26 12:38:03 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 12:38:03 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/4214126572' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 26 12:38:03 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 12:38:04 compute-0 ceph-mon[74966]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 26 12:38:04 compute-0 ceph-mon[74966]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 26 12:38:04 compute-0 ceph-mon[74966]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 26 12:38:04 compute-0 ceph-mon[74966]: Cluster is now healthy
Nov 26 12:38:04 compute-0 flamboyant_proskuriakova[84782]:  stderr: 2025-11-26T12:38:02.847+0000 7f102af5e740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 12:38:04 compute-0 flamboyant_proskuriakova[84782]:  stderr: 2025-11-26T12:38:02.847+0000 7f102af5e740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 12:38:04 compute-0 flamboyant_proskuriakova[84782]:  stderr: 2025-11-26T12:38:02.848+0000 7f102af5e740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 12:38:04 compute-0 flamboyant_proskuriakova[84782]:  stderr: 2025-11-26T12:38:02.848+0000 7f102af5e740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Nov 26 12:38:04 compute-0 flamboyant_proskuriakova[84782]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 26 12:38:05 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 26 12:38:05 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 26 12:38:05 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 26 12:38:05 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 26 12:38:05 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 26 12:38:05 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 26 12:38:05 compute-0 flamboyant_proskuriakova[84782]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 26 12:38:05 compute-0 flamboyant_proskuriakova[84782]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 26 12:38:05 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 26 12:38:05 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 241a5bb6-a0a2-4f46-939e-db435256704f
Nov 26 12:38:05 compute-0 ceph-mon[74966]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 12:38:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "241a5bb6-a0a2-4f46-939e-db435256704f"} v 0) v1
Nov 26 12:38:05 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1786557833' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "241a5bb6-a0a2-4f46-939e-db435256704f"}]: dispatch
Nov 26 12:38:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Nov 26 12:38:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 12:38:05 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1786557833' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "241a5bb6-a0a2-4f46-939e-db435256704f"}]': finished
Nov 26 12:38:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Nov 26 12:38:05 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Nov 26 12:38:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 12:38:05 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 12:38:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 12:38:05 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 12:38:05 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 12:38:05 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 12:38:05 compute-0 lvm[85775]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 26 12:38:05 compute-0 lvm[85775]: VG ceph_vg1 finished
Nov 26 12:38:05 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 26 12:38:05 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 26 12:38:05 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Nov 26 12:38:05 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 26 12:38:05 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 26 12:38:05 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 26 12:38:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 26 12:38:05 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/714457435' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 26 12:38:05 compute-0 flamboyant_proskuriakova[84782]:  stderr: got monmap epoch 1
Nov 26 12:38:05 compute-0 flamboyant_proskuriakova[84782]: --> Creating keyring file for osd.1
Nov 26 12:38:05 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 26 12:38:05 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 26 12:38:05 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 241a5bb6-a0a2-4f46-939e-db435256704f --setuser ceph --setgroup ceph
Nov 26 12:38:05 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 12:38:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:38:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:38:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:38:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:38:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:38:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:38:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:38:06 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1786557833' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "241a5bb6-a0a2-4f46-939e-db435256704f"}]: dispatch
Nov 26 12:38:06 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1786557833' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "241a5bb6-a0a2-4f46-939e-db435256704f"}]': finished
Nov 26 12:38:06 compute-0 ceph-mon[74966]: osdmap e5: 2 total, 0 up, 2 in
Nov 26 12:38:06 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 12:38:06 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 12:38:06 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/714457435' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 26 12:38:07 compute-0 ceph-mon[74966]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 12:38:07 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 12:38:07 compute-0 flamboyant_proskuriakova[84782]:  stderr: 2025-11-26T12:38:05.863+0000 7ffb26655740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 12:38:07 compute-0 flamboyant_proskuriakova[84782]:  stderr: 2025-11-26T12:38:05.863+0000 7ffb26655740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 12:38:07 compute-0 flamboyant_proskuriakova[84782]:  stderr: 2025-11-26T12:38:05.863+0000 7ffb26655740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 12:38:07 compute-0 flamboyant_proskuriakova[84782]:  stderr: 2025-11-26T12:38:05.863+0000 7ffb26655740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Nov 26 12:38:07 compute-0 flamboyant_proskuriakova[84782]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Nov 26 12:38:08 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 26 12:38:08 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 26 12:38:08 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 26 12:38:08 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 26 12:38:08 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 26 12:38:08 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 26 12:38:08 compute-0 flamboyant_proskuriakova[84782]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 26 12:38:08 compute-0 flamboyant_proskuriakova[84782]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Nov 26 12:38:08 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 26 12:38:08 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 830db782-65d7-4e18-bccf-dab0d5334a8b
Nov 26 12:38:08 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b"} v 0) v1
Nov 26 12:38:08 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3656312750' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b"}]: dispatch
Nov 26 12:38:08 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Nov 26 12:38:08 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 12:38:08 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3656312750' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b"}]': finished
Nov 26 12:38:08 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Nov 26 12:38:08 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Nov 26 12:38:08 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 12:38:08 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 12:38:08 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 12:38:08 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 12:38:08 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 12:38:08 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 12:38:08 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 12:38:08 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 12:38:08 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 12:38:08 compute-0 lvm[86707]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 26 12:38:08 compute-0 lvm[86707]: VG ceph_vg2 finished
Nov 26 12:38:08 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 26 12:38:08 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Nov 26 12:38:08 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Nov 26 12:38:08 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 26 12:38:08 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 26 12:38:08 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Nov 26 12:38:08 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 26 12:38:08 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3345018760' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 26 12:38:08 compute-0 flamboyant_proskuriakova[84782]:  stderr: got monmap epoch 1
Nov 26 12:38:08 compute-0 flamboyant_proskuriakova[84782]: --> Creating keyring file for osd.2
Nov 26 12:38:08 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Nov 26 12:38:08 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Nov 26 12:38:08 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 830db782-65d7-4e18-bccf-dab0d5334a8b --setuser ceph --setgroup ceph
Nov 26 12:38:09 compute-0 ceph-mon[74966]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 12:38:09 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/3656312750' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b"}]: dispatch
Nov 26 12:38:09 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/3656312750' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b"}]': finished
Nov 26 12:38:09 compute-0 ceph-mon[74966]: osdmap e6: 3 total, 0 up, 3 in
Nov 26 12:38:09 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 12:38:09 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 12:38:09 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 12:38:09 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/3345018760' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 26 12:38:09 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 12:38:10 compute-0 ceph-mon[74966]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 12:38:10 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:38:10 compute-0 flamboyant_proskuriakova[84782]:  stderr: 2025-11-26T12:38:08.874+0000 7f0a84c45740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 12:38:10 compute-0 flamboyant_proskuriakova[84782]:  stderr: 2025-11-26T12:38:08.874+0000 7f0a84c45740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 12:38:10 compute-0 flamboyant_proskuriakova[84782]:  stderr: 2025-11-26T12:38:08.874+0000 7f0a84c45740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 12:38:10 compute-0 flamboyant_proskuriakova[84782]:  stderr: 2025-11-26T12:38:08.874+0000 7f0a84c45740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Nov 26 12:38:10 compute-0 flamboyant_proskuriakova[84782]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Nov 26 12:38:11 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 26 12:38:11 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Nov 26 12:38:11 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 26 12:38:11 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Nov 26 12:38:11 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 26 12:38:11 compute-0 flamboyant_proskuriakova[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 26 12:38:11 compute-0 flamboyant_proskuriakova[84782]: --> ceph-volume lvm activate successful for osd ID: 2
Nov 26 12:38:11 compute-0 flamboyant_proskuriakova[84782]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Nov 26 12:38:11 compute-0 systemd[1]: libpod-838032c3aa40db808c90baa7668cc5731707a8e65883d0d92f7ebbec42e53038.scope: Deactivated successfully.
Nov 26 12:38:11 compute-0 systemd[1]: libpod-838032c3aa40db808c90baa7668cc5731707a8e65883d0d92f7ebbec42e53038.scope: Consumed 4.085s CPU time.
Nov 26 12:38:11 compute-0 conmon[84782]: conmon 838032c3aa40db808c90 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-838032c3aa40db808c90baa7668cc5731707a8e65883d0d92f7ebbec42e53038.scope/container/memory.events
Nov 26 12:38:11 compute-0 podman[87609]: 2025-11-26 12:38:11.131892388 +0000 UTC m=+0.018946197 container died 838032c3aa40db808c90baa7668cc5731707a8e65883d0d92f7ebbec42e53038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 12:38:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-11f6cc841334fcd28e003b78085738ac6013700560359f409e705c8fff0ab5d1-merged.mount: Deactivated successfully.
Nov 26 12:38:11 compute-0 podman[87609]: 2025-11-26 12:38:11.166181755 +0000 UTC m=+0.053235543 container remove 838032c3aa40db808c90baa7668cc5731707a8e65883d0d92f7ebbec42e53038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 12:38:11 compute-0 systemd[1]: libpod-conmon-838032c3aa40db808c90baa7668cc5731707a8e65883d0d92f7ebbec42e53038.scope: Deactivated successfully.
Nov 26 12:38:11 compute-0 sudo[84677]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:11 compute-0 sudo[87621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:11 compute-0 sudo[87621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:11 compute-0 sudo[87621]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:11 compute-0 sudo[87646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:38:11 compute-0 sudo[87646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:11 compute-0 sudo[87646]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:11 compute-0 sudo[87671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:11 compute-0 sudo[87671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:11 compute-0 sudo[87671]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:11 compute-0 sudo[87696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- lvm list --format json
Nov 26 12:38:11 compute-0 sudo[87696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:11 compute-0 podman[87751]: 2025-11-26 12:38:11.571037247 +0000 UTC m=+0.024449169 container create 8d162977a3f29239cf077b028780579d528ba977987d9441898d8766d658bd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:38:11 compute-0 systemd[1]: Started libpod-conmon-8d162977a3f29239cf077b028780579d528ba977987d9441898d8766d658bd02.scope.
Nov 26 12:38:11 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:11 compute-0 podman[87751]: 2025-11-26 12:38:11.616982776 +0000 UTC m=+0.070394708 container init 8d162977a3f29239cf077b028780579d528ba977987d9441898d8766d658bd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 12:38:11 compute-0 podman[87751]: 2025-11-26 12:38:11.621243308 +0000 UTC m=+0.074655230 container start 8d162977a3f29239cf077b028780579d528ba977987d9441898d8766d658bd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 12:38:11 compute-0 podman[87751]: 2025-11-26 12:38:11.622552866 +0000 UTC m=+0.075964788 container attach 8d162977a3f29239cf077b028780579d528ba977987d9441898d8766d658bd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:38:11 compute-0 clever_goodall[87764]: 167 167
Nov 26 12:38:11 compute-0 systemd[1]: libpod-8d162977a3f29239cf077b028780579d528ba977987d9441898d8766d658bd02.scope: Deactivated successfully.
Nov 26 12:38:11 compute-0 podman[87751]: 2025-11-26 12:38:11.624615307 +0000 UTC m=+0.078027230 container died 8d162977a3f29239cf077b028780579d528ba977987d9441898d8766d658bd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goodall, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 12:38:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-43fb2278a5b006dc12500b7d5251b031172ee98c7dc2ca7ffcb15b2eae6d4714-merged.mount: Deactivated successfully.
Nov 26 12:38:11 compute-0 podman[87751]: 2025-11-26 12:38:11.642132661 +0000 UTC m=+0.095544583 container remove 8d162977a3f29239cf077b028780579d528ba977987d9441898d8766d658bd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 12:38:11 compute-0 podman[87751]: 2025-11-26 12:38:11.561409331 +0000 UTC m=+0.014821253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:11 compute-0 systemd[1]: libpod-conmon-8d162977a3f29239cf077b028780579d528ba977987d9441898d8766d658bd02.scope: Deactivated successfully.
Nov 26 12:38:11 compute-0 podman[87786]: 2025-11-26 12:38:11.750673752 +0000 UTC m=+0.027722283 container create 5dbb57fd5389f5646a0f4ef0a046e2918fbcf024593f27f5776278da97973682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:38:11 compute-0 systemd[1]: Started libpod-conmon-5dbb57fd5389f5646a0f4ef0a046e2918fbcf024593f27f5776278da97973682.scope.
Nov 26 12:38:11 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c64569f8dc0b05e5bf31d462fe794e85639caee12b037fad0d1090672de2325/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c64569f8dc0b05e5bf31d462fe794e85639caee12b037fad0d1090672de2325/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c64569f8dc0b05e5bf31d462fe794e85639caee12b037fad0d1090672de2325/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c64569f8dc0b05e5bf31d462fe794e85639caee12b037fad0d1090672de2325/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:11 compute-0 podman[87786]: 2025-11-26 12:38:11.809436662 +0000 UTC m=+0.086485192 container init 5dbb57fd5389f5646a0f4ef0a046e2918fbcf024593f27f5776278da97973682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hellman, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:38:11 compute-0 podman[87786]: 2025-11-26 12:38:11.814819046 +0000 UTC m=+0.091867576 container start 5dbb57fd5389f5646a0f4ef0a046e2918fbcf024593f27f5776278da97973682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hellman, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:38:11 compute-0 podman[87786]: 2025-11-26 12:38:11.816135998 +0000 UTC m=+0.093184548 container attach 5dbb57fd5389f5646a0f4ef0a046e2918fbcf024593f27f5776278da97973682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 12:38:11 compute-0 podman[87786]: 2025-11-26 12:38:11.738047574 +0000 UTC m=+0.015096123 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:11 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]: {
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:     "0": [
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:         {
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "devices": [
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "/dev/loop3"
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             ],
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "lv_name": "ceph_lv0",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "lv_size": "21470642176",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "name": "ceph_lv0",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "tags": {
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.cluster_name": "ceph",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.crush_device_class": "",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.encrypted": "0",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.osd_id": "0",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.type": "block",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.vdo": "0"
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             },
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "type": "block",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "vg_name": "ceph_vg0"
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:         }
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:     ],
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:     "1": [
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:         {
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "devices": [
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "/dev/loop4"
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             ],
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "lv_name": "ceph_lv1",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "lv_size": "21470642176",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "name": "ceph_lv1",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "tags": {
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.cluster_name": "ceph",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.crush_device_class": "",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.encrypted": "0",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.osd_id": "1",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.type": "block",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.vdo": "0"
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             },
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "type": "block",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "vg_name": "ceph_vg1"
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:         }
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:     ],
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:     "2": [
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:         {
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "devices": [
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "/dev/loop5"
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             ],
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "lv_name": "ceph_lv2",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "lv_size": "21470642176",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "name": "ceph_lv2",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "tags": {
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.cluster_name": "ceph",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.crush_device_class": "",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.encrypted": "0",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.osd_id": "2",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.type": "block",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:                 "ceph.vdo": "0"
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             },
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "type": "block",
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:             "vg_name": "ceph_vg2"
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:         }
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]:     ]
Nov 26 12:38:12 compute-0 upbeat_hellman[87799]: }
Nov 26 12:38:12 compute-0 systemd[1]: libpod-5dbb57fd5389f5646a0f4ef0a046e2918fbcf024593f27f5776278da97973682.scope: Deactivated successfully.
Nov 26 12:38:12 compute-0 conmon[87799]: conmon 5dbb57fd5389f5646a0f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5dbb57fd5389f5646a0f4ef0a046e2918fbcf024593f27f5776278da97973682.scope/container/memory.events
Nov 26 12:38:12 compute-0 podman[87786]: 2025-11-26 12:38:12.443531439 +0000 UTC m=+0.720579969 container died 5dbb57fd5389f5646a0f4ef0a046e2918fbcf024593f27f5776278da97973682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:38:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c64569f8dc0b05e5bf31d462fe794e85639caee12b037fad0d1090672de2325-merged.mount: Deactivated successfully.
Nov 26 12:38:12 compute-0 podman[87786]: 2025-11-26 12:38:12.472945531 +0000 UTC m=+0.749994060 container remove 5dbb57fd5389f5646a0f4ef0a046e2918fbcf024593f27f5776278da97973682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hellman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 12:38:12 compute-0 systemd[1]: libpod-conmon-5dbb57fd5389f5646a0f4ef0a046e2918fbcf024593f27f5776278da97973682.scope: Deactivated successfully.
Nov 26 12:38:12 compute-0 sudo[87696]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:12 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Nov 26 12:38:12 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 26 12:38:12 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:38:12 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:38:12 compute-0 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Nov 26 12:38:12 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Nov 26 12:38:12 compute-0 sudo[87818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:12 compute-0 sudo[87818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:12 compute-0 sudo[87818]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:12 compute-0 sudo[87843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:38:12 compute-0 sudo[87843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:12 compute-0 sudo[87843]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:12 compute-0 sudo[87868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:12 compute-0 sudo[87868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:12 compute-0 sudo[87868]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:12 compute-0 sudo[87893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 12:38:12 compute-0 sudo[87893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:12 compute-0 podman[87951]: 2025-11-26 12:38:12.87395128 +0000 UTC m=+0.028551942 container create a78b6e67184336dc778b2a9c1ce8e405bfbfc21db4fc715a2bf703cd231a93fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brahmagupta, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 12:38:12 compute-0 ceph-mon[74966]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 12:38:12 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 26 12:38:12 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:38:12 compute-0 systemd[1]: Started libpod-conmon-a78b6e67184336dc778b2a9c1ce8e405bfbfc21db4fc715a2bf703cd231a93fe.scope.
Nov 26 12:38:12 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:12 compute-0 podman[87951]: 2025-11-26 12:38:12.914816737 +0000 UTC m=+0.069417400 container init a78b6e67184336dc778b2a9c1ce8e405bfbfc21db4fc715a2bf703cd231a93fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 12:38:12 compute-0 podman[87951]: 2025-11-26 12:38:12.918914651 +0000 UTC m=+0.073515314 container start a78b6e67184336dc778b2a9c1ce8e405bfbfc21db4fc715a2bf703cd231a93fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brahmagupta, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:38:12 compute-0 podman[87951]: 2025-11-26 12:38:12.919854759 +0000 UTC m=+0.074455422 container attach a78b6e67184336dc778b2a9c1ce8e405bfbfc21db4fc715a2bf703cd231a93fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brahmagupta, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:38:12 compute-0 distracted_brahmagupta[87964]: 167 167
Nov 26 12:38:12 compute-0 systemd[1]: libpod-a78b6e67184336dc778b2a9c1ce8e405bfbfc21db4fc715a2bf703cd231a93fe.scope: Deactivated successfully.
Nov 26 12:38:12 compute-0 conmon[87964]: conmon a78b6e67184336dc778b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a78b6e67184336dc778b2a9c1ce8e405bfbfc21db4fc715a2bf703cd231a93fe.scope/container/memory.events
Nov 26 12:38:12 compute-0 podman[87951]: 2025-11-26 12:38:12.92231852 +0000 UTC m=+0.076919184 container died a78b6e67184336dc778b2a9c1ce8e405bfbfc21db4fc715a2bf703cd231a93fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:38:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-9866a1db4e48070e20337cc47024546cf0fb860e65ab7b2ea97c9f468b556f6a-merged.mount: Deactivated successfully.
Nov 26 12:38:12 compute-0 podman[87951]: 2025-11-26 12:38:12.938357125 +0000 UTC m=+0.092957789 container remove a78b6e67184336dc778b2a9c1ce8e405bfbfc21db4fc715a2bf703cd231a93fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:38:12 compute-0 podman[87951]: 2025-11-26 12:38:12.863338049 +0000 UTC m=+0.017938732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:12 compute-0 systemd[1]: libpod-conmon-a78b6e67184336dc778b2a9c1ce8e405bfbfc21db4fc715a2bf703cd231a93fe.scope: Deactivated successfully.
Nov 26 12:38:13 compute-0 podman[87994]: 2025-11-26 12:38:13.105012861 +0000 UTC m=+0.023585024 container create 10766f61f2498c88ad44a01d39e3f2435265a15c673c74608fac73f4009a5bc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:38:13 compute-0 systemd[1]: Started libpod-conmon-10766f61f2498c88ad44a01d39e3f2435265a15c673c74608fac73f4009a5bc4.scope.
Nov 26 12:38:13 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee5b22687b3c8cee60178554bbed41f4d8ff2f79a5498254b9087800b3dd969f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee5b22687b3c8cee60178554bbed41f4d8ff2f79a5498254b9087800b3dd969f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee5b22687b3c8cee60178554bbed41f4d8ff2f79a5498254b9087800b3dd969f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee5b22687b3c8cee60178554bbed41f4d8ff2f79a5498254b9087800b3dd969f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee5b22687b3c8cee60178554bbed41f4d8ff2f79a5498254b9087800b3dd969f/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:13 compute-0 podman[87994]: 2025-11-26 12:38:13.15407138 +0000 UTC m=+0.072643563 container init 10766f61f2498c88ad44a01d39e3f2435265a15c673c74608fac73f4009a5bc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 12:38:13 compute-0 podman[87994]: 2025-11-26 12:38:13.160401958 +0000 UTC m=+0.078974121 container start 10766f61f2498c88ad44a01d39e3f2435265a15c673c74608fac73f4009a5bc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate-test, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:38:13 compute-0 podman[87994]: 2025-11-26 12:38:13.164210103 +0000 UTC m=+0.082782266 container attach 10766f61f2498c88ad44a01d39e3f2435265a15c673c74608fac73f4009a5bc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate-test, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 12:38:13 compute-0 podman[87994]: 2025-11-26 12:38:13.095960043 +0000 UTC m=+0.014532225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:13 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate-test[88007]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 26 12:38:13 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate-test[88007]:                             [--no-systemd] [--no-tmpfs]
Nov 26 12:38:13 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate-test[88007]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 26 12:38:13 compute-0 systemd[1]: libpod-10766f61f2498c88ad44a01d39e3f2435265a15c673c74608fac73f4009a5bc4.scope: Deactivated successfully.
Nov 26 12:38:13 compute-0 podman[87994]: 2025-11-26 12:38:13.708491401 +0000 UTC m=+0.627063563 container died 10766f61f2498c88ad44a01d39e3f2435265a15c673c74608fac73f4009a5bc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 12:38:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee5b22687b3c8cee60178554bbed41f4d8ff2f79a5498254b9087800b3dd969f-merged.mount: Deactivated successfully.
Nov 26 12:38:13 compute-0 podman[87994]: 2025-11-26 12:38:13.73707968 +0000 UTC m=+0.655651843 container remove 10766f61f2498c88ad44a01d39e3f2435265a15c673c74608fac73f4009a5bc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 12:38:13 compute-0 systemd[1]: libpod-conmon-10766f61f2498c88ad44a01d39e3f2435265a15c673c74608fac73f4009a5bc4.scope: Deactivated successfully.
Nov 26 12:38:13 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 12:38:13 compute-0 systemd[1]: Reloading.
Nov 26 12:38:13 compute-0 ceph-mon[74966]: Deploying daemon osd.0 on compute-0
Nov 26 12:38:13 compute-0 systemd-sysv-generator[88064]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:38:13 compute-0 systemd-rc-local-generator[88060]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:38:14 compute-0 systemd[1]: Reloading.
Nov 26 12:38:14 compute-0 systemd-rc-local-generator[88102]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:38:14 compute-0 systemd-sysv-generator[88106]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:38:14 compute-0 systemd[1]: Starting Ceph osd.0 for f7d7fe93-41e5-51c4-b72d-63b38686102e...
Nov 26 12:38:14 compute-0 podman[88156]: 2025-11-26 12:38:14.473794666 +0000 UTC m=+0.026586294 container create bb8f0d0586e1ce22a0714dd07f57f56957c2a354b5ba5ec890a7efff70662a46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:38:14 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc484ce1a77669a8aedc043453118501af41931903eb548e966d95a679f3bcd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc484ce1a77669a8aedc043453118501af41931903eb548e966d95a679f3bcd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc484ce1a77669a8aedc043453118501af41931903eb548e966d95a679f3bcd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc484ce1a77669a8aedc043453118501af41931903eb548e966d95a679f3bcd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc484ce1a77669a8aedc043453118501af41931903eb548e966d95a679f3bcd/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:14 compute-0 podman[88156]: 2025-11-26 12:38:14.525502798 +0000 UTC m=+0.078294426 container init bb8f0d0586e1ce22a0714dd07f57f56957c2a354b5ba5ec890a7efff70662a46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:38:14 compute-0 podman[88156]: 2025-11-26 12:38:14.530171531 +0000 UTC m=+0.082963159 container start bb8f0d0586e1ce22a0714dd07f57f56957c2a354b5ba5ec890a7efff70662a46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:38:14 compute-0 podman[88156]: 2025-11-26 12:38:14.531304635 +0000 UTC m=+0.084096263 container attach bb8f0d0586e1ce22a0714dd07f57f56957c2a354b5ba5ec890a7efff70662a46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 26 12:38:14 compute-0 podman[88156]: 2025-11-26 12:38:14.463036261 +0000 UTC m=+0.015827909 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:14 compute-0 ceph-mon[74966]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 12:38:15 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate[88168]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 26 12:38:15 compute-0 bash[88156]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 26 12:38:15 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate[88168]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 26 12:38:15 compute-0 bash[88156]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 26 12:38:15 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate[88168]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 26 12:38:15 compute-0 bash[88156]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 26 12:38:15 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate[88168]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 26 12:38:15 compute-0 bash[88156]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 26 12:38:15 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate[88168]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 26 12:38:15 compute-0 bash[88156]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 26 12:38:15 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate[88168]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 26 12:38:15 compute-0 bash[88156]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 26 12:38:15 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate[88168]: --> ceph-volume raw activate successful for osd ID: 0
Nov 26 12:38:15 compute-0 bash[88156]: --> ceph-volume raw activate successful for osd ID: 0
Nov 26 12:38:15 compute-0 systemd[1]: libpod-bb8f0d0586e1ce22a0714dd07f57f56957c2a354b5ba5ec890a7efff70662a46.scope: Deactivated successfully.
Nov 26 12:38:15 compute-0 conmon[88168]: conmon bb8f0d0586e1ce22a071 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bb8f0d0586e1ce22a0714dd07f57f56957c2a354b5ba5ec890a7efff70662a46.scope/container/memory.events
Nov 26 12:38:15 compute-0 podman[88156]: 2025-11-26 12:38:15.347483197 +0000 UTC m=+0.900274835 container died bb8f0d0586e1ce22a0714dd07f57f56957c2a354b5ba5ec890a7efff70662a46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:38:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-abc484ce1a77669a8aedc043453118501af41931903eb548e966d95a679f3bcd-merged.mount: Deactivated successfully.
Nov 26 12:38:15 compute-0 podman[88156]: 2025-11-26 12:38:15.377632961 +0000 UTC m=+0.930424589 container remove bb8f0d0586e1ce22a0714dd07f57f56957c2a354b5ba5ec890a7efff70662a46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 12:38:15 compute-0 podman[88346]: 2025-11-26 12:38:15.51272131 +0000 UTC m=+0.026771323 container create 9981961b79970f3203da5890b61d540db16b3fc16ea1d2c76344e2daf1f706a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 12:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4814c27208e07552f0380e993872782cc0314310d0b35e7daf079b1bc64c999/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4814c27208e07552f0380e993872782cc0314310d0b35e7daf079b1bc64c999/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4814c27208e07552f0380e993872782cc0314310d0b35e7daf079b1bc64c999/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4814c27208e07552f0380e993872782cc0314310d0b35e7daf079b1bc64c999/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4814c27208e07552f0380e993872782cc0314310d0b35e7daf079b1bc64c999/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:15 compute-0 podman[88346]: 2025-11-26 12:38:15.5556547 +0000 UTC m=+0.069704723 container init 9981961b79970f3203da5890b61d540db16b3fc16ea1d2c76344e2daf1f706a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 12:38:15 compute-0 podman[88346]: 2025-11-26 12:38:15.561213217 +0000 UTC m=+0.075263229 container start 9981961b79970f3203da5890b61d540db16b3fc16ea1d2c76344e2daf1f706a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 12:38:15 compute-0 bash[88346]: 9981961b79970f3203da5890b61d540db16b3fc16ea1d2c76344e2daf1f706a9
Nov 26 12:38:15 compute-0 podman[88346]: 2025-11-26 12:38:15.501490501 +0000 UTC m=+0.015540534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:15 compute-0 systemd[1]: Started Ceph osd.0 for f7d7fe93-41e5-51c4-b72d-63b38686102e.
Nov 26 12:38:15 compute-0 sudo[87893]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:15 compute-0 ceph-osd[88362]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 12:38:15 compute-0 ceph-osd[88362]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 26 12:38:15 compute-0 ceph-osd[88362]: pidfile_write: ignore empty --pid-file
Nov 26 12:38:15 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:38:15 compute-0 ceph-osd[88362]: bdev(0x56032f683800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 26 12:38:15 compute-0 ceph-osd[88362]: bdev(0x56032f683800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 26 12:38:15 compute-0 ceph-osd[88362]: bdev(0x56032f683800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 12:38:15 compute-0 ceph-osd[88362]: bdev(0x56032f683800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 12:38:15 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 12:38:15 compute-0 ceph-osd[88362]: bdev(0x5603304bb800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 26 12:38:15 compute-0 ceph-osd[88362]: bdev(0x5603304bb800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 26 12:38:15 compute-0 ceph-osd[88362]: bdev(0x5603304bb800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 12:38:15 compute-0 ceph-osd[88362]: bdev(0x5603304bb800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 12:38:15 compute-0 ceph-osd[88362]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 26 12:38:15 compute-0 ceph-osd[88362]: bdev(0x5603304bb800 /var/lib/ceph/osd/ceph-0/block) close
Nov 26 12:38:15 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:15 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:38:15 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:15 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Nov 26 12:38:15 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 26 12:38:15 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:38:15 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:38:15 compute-0 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Nov 26 12:38:15 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Nov 26 12:38:15 compute-0 sudo[88375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:15 compute-0 sudo[88375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:15 compute-0 sudo[88375]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:15 compute-0 sudo[88400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:38:15 compute-0 sudo[88400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:15 compute-0 sudo[88400]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:15 compute-0 sudo[88425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:15 compute-0 sudo[88425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:15 compute-0 sudo[88425]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:15 compute-0 sudo[88450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 12:38:15 compute-0 sudo[88450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:15 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 12:38:15 compute-0 ceph-osd[88362]: bdev(0x56032f683800 /var/lib/ceph/osd/ceph-0/block) close
Nov 26 12:38:15 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:38:15 compute-0 podman[88511]: 2025-11-26 12:38:15.99661319 +0000 UTC m=+0.026452920 container create 798c0b5121c8882ec8ecde568b189e5f0e86e3d0b152dcc3ea7ab7cff910a6a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:38:16 compute-0 systemd[1]: Started libpod-conmon-798c0b5121c8882ec8ecde568b189e5f0e86e3d0b152dcc3ea7ab7cff910a6a1.scope.
Nov 26 12:38:16 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:16 compute-0 podman[88511]: 2025-11-26 12:38:16.048875069 +0000 UTC m=+0.078714820 container init 798c0b5121c8882ec8ecde568b189e5f0e86e3d0b152dcc3ea7ab7cff910a6a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_clarke, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 12:38:16 compute-0 podman[88511]: 2025-11-26 12:38:16.053371077 +0000 UTC m=+0.083210807 container start 798c0b5121c8882ec8ecde568b189e5f0e86e3d0b152dcc3ea7ab7cff910a6a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_clarke, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 12:38:16 compute-0 podman[88511]: 2025-11-26 12:38:16.05528656 +0000 UTC m=+0.085126311 container attach 798c0b5121c8882ec8ecde568b189e5f0e86e3d0b152dcc3ea7ab7cff910a6a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_clarke, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 12:38:16 compute-0 festive_clarke[88524]: 167 167
Nov 26 12:38:16 compute-0 systemd[1]: libpod-798c0b5121c8882ec8ecde568b189e5f0e86e3d0b152dcc3ea7ab7cff910a6a1.scope: Deactivated successfully.
Nov 26 12:38:16 compute-0 conmon[88524]: conmon 798c0b5121c8882ec8ec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-798c0b5121c8882ec8ecde568b189e5f0e86e3d0b152dcc3ea7ab7cff910a6a1.scope/container/memory.events
Nov 26 12:38:16 compute-0 podman[88511]: 2025-11-26 12:38:16.057613333 +0000 UTC m=+0.087453062 container died 798c0b5121c8882ec8ecde568b189e5f0e86e3d0b152dcc3ea7ab7cff910a6a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 12:38:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-1598ea706c603887ec5623530f212330fa635c1a6aba781d7eaabf95532e9958-merged.mount: Deactivated successfully.
Nov 26 12:38:16 compute-0 podman[88511]: 2025-11-26 12:38:16.0741019 +0000 UTC m=+0.103941630 container remove 798c0b5121c8882ec8ecde568b189e5f0e86e3d0b152dcc3ea7ab7cff910a6a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_clarke, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:38:16 compute-0 podman[88511]: 2025-11-26 12:38:15.98595288 +0000 UTC m=+0.015792620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:16 compute-0 systemd[1]: libpod-conmon-798c0b5121c8882ec8ecde568b189e5f0e86e3d0b152dcc3ea7ab7cff910a6a1.scope: Deactivated successfully.
Nov 26 12:38:16 compute-0 ceph-osd[88362]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Nov 26 12:38:16 compute-0 ceph-osd[88362]: load: jerasure load: lrc 
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bdev(0x56033053cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bdev(0x56033053cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bdev(0x56033053cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bdev(0x56033053cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bdev(0x56033053cc00 /var/lib/ceph/osd/ceph-0/block) close
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bdev(0x56033053cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bdev(0x56033053cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bdev(0x56033053cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bdev(0x56033053cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bdev(0x56033053cc00 /var/lib/ceph/osd/ceph-0/block) close
Nov 26 12:38:16 compute-0 podman[88565]: 2025-11-26 12:38:16.252904777 +0000 UTC m=+0.026776362 container create 28e5fffa5e95aa111baf674d5358bc2ead2950af47f54348f63fa0b427bd2d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate-test, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Nov 26 12:38:16 compute-0 systemd[1]: Started libpod-conmon-28e5fffa5e95aa111baf674d5358bc2ead2950af47f54348f63fa0b427bd2d4a.scope.
Nov 26 12:38:16 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5df436fe03170fa9941245e25b02eb7f52d93b0dc4d4e155327274b09324c898/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5df436fe03170fa9941245e25b02eb7f52d93b0dc4d4e155327274b09324c898/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5df436fe03170fa9941245e25b02eb7f52d93b0dc4d4e155327274b09324c898/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5df436fe03170fa9941245e25b02eb7f52d93b0dc4d4e155327274b09324c898/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5df436fe03170fa9941245e25b02eb7f52d93b0dc4d4e155327274b09324c898/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:16 compute-0 podman[88565]: 2025-11-26 12:38:16.310874237 +0000 UTC m=+0.084745821 container init 28e5fffa5e95aa111baf674d5358bc2ead2950af47f54348f63fa0b427bd2d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate-test, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 26 12:38:16 compute-0 podman[88565]: 2025-11-26 12:38:16.317074467 +0000 UTC m=+0.090946051 container start 28e5fffa5e95aa111baf674d5358bc2ead2950af47f54348f63fa0b427bd2d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate-test, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:38:16 compute-0 podman[88565]: 2025-11-26 12:38:16.31865571 +0000 UTC m=+0.092527293 container attach 28e5fffa5e95aa111baf674d5358bc2ead2950af47f54348f63fa0b427bd2d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:38:16 compute-0 podman[88565]: 2025-11-26 12:38:16.241969287 +0000 UTC m=+0.015840891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 26 12:38:16 compute-0 ceph-osd[88362]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bdev(0x56033053cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bdev(0x56033053cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bdev(0x56033053cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bdev(0x56033053cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bdev(0x56033053d400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bdev(0x56033053d400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bdev(0x56033053d400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bdev(0x56033053d400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bluefs mount
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bluefs mount shared_bdev_used = 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: RocksDB version: 7.9.2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Git sha 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: DB SUMMARY
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: DB Session ID:  OP18G8N8BK0JDZ3FFAWB
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: CURRENT file:  CURRENT
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                         Options.error_if_exists: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.create_if_missing: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                                     Options.env: 0x56033050dc70
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                                Options.info_log: 0x56032f70a8a0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                              Options.statistics: (nil)
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                               Options.use_fsync: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                              Options.db_log_dir: 
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                                 Options.wal_dir: db.wal
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.write_buffer_manager: 0x560330616460
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.unordered_write: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                               Options.row_cache: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                              Options.wal_filter: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.two_write_queues: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.wal_compression: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.atomic_flush: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.max_background_jobs: 4
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.max_background_compactions: -1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.max_subcompactions: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.max_open_files: -1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Compression algorithms supported:
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         kZSTD supported: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         kXpressCompression supported: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         kBZip2Compression supported: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         kLZ4Compression supported: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         kZlibCompression supported: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         kLZ4HCCompression supported: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         kSnappyCompression supported: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f70a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56032f6f71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f70a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56032f6f71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f70a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56032f6f71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f70a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56032f6f71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
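[editor's note] The block above is ceph-osd echoing the RocksDB options it applied to column family [m-2]. As a hedged illustration only (none of this code appears in the log), the same per-column-family settings would look roughly like the following through RocksDB's public C++ API. The bloom-filter bits-per-key value is an assumption, since the dump only reports "filter_policy: bloomfilter", and the BinnedLRUCache block cache named above appears to be Ceph's own cache implementation, so it is not reproduced here.

    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    // Hedged sketch, not taken from Ceph source: per-column-family options
    // matching the values logged above (16 MiB write buffers, 64 memtables,
    // LZ4 compression, level-style compaction with an 8x level multiplier).
    rocksdb::ColumnFamilyOptions MakeLoggedCfOptions() {
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16777216;
      cf.max_write_buffer_number = 64;
      cf.min_write_buffer_number_to_merge = 6;
      cf.compression = rocksdb::kLZ4Compression;
      cf.num_levels = 7;
      cf.level0_file_num_compaction_trigger = 8;
      cf.level0_slowdown_writes_trigger = 20;
      cf.level0_stop_writes_trigger = 36;
      cf.target_file_size_base = 67108864;
      cf.max_bytes_for_level_base = 1073741824;
      cf.max_bytes_for_level_multiplier = 8;
      cf.compaction_style = rocksdb::kCompactionStyleLevel;
      cf.ttl = 2592000;  // 30 days, as logged

      rocksdb::BlockBasedTableOptions table;
      table.block_size = 4096;
      table.cache_index_and_filter_blocks = true;
      table.pin_top_level_index_and_filter = true;
      table.format_version = 5;
      // The dump only says "filter_policy: bloomfilter"; 10 bits/key is assumed.
      table.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
      cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(table));
      return cf;
    }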
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f70a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56032f6f71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f70a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56032f6f71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f70a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56032f6f71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
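[editor's note] Column families [m-2], [p-0], [p-1] and [p-2] above report identical options and the same block_cache pointer, i.e. they share one options object and one cache. As a hedged sketch only (the database path is a placeholder and the column-family list is trimmed, not taken from this host), opening an existing RocksDB database with several column families that share one ColumnFamilyOptions object looks roughly like this:

    #include <cassert>
    #include <string>
    #include <vector>
    #include <rocksdb/db.h>

    // Hedged sketch: open a pre-existing DB whose column families all use the
    // same options, which is the pattern the identical dumps above suggest.
    // The path below is a placeholder, not a value from this OSD.
    void OpenWithSharedCfOptions(const rocksdb::ColumnFamilyOptions& shared_cf) {
      rocksdb::DBOptions db_opts;
      db_opts.create_if_missing = false;

      std::vector<rocksdb::ColumnFamilyDescriptor> cfs = {
          {rocksdb::kDefaultColumnFamilyName, shared_cf},
          {"m-2", shared_cf},
          {"p-0", shared_cf},
          {"p-1", shared_cf},
          {"p-2", shared_cf},
      };

      std::vector<rocksdb::ColumnFamilyHandle*> handles;
      rocksdb::DB* db = nullptr;
      rocksdb::Status s =
          rocksdb::DB::Open(db_opts, "/path/to/db", cfs, &handles, &db);
      assert(s.ok());

      // Release handles before closing the database.
      for (auto* h : handles) db->DestroyColumnFamilyHandle(h);
      delete db;
    }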
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f70a240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56032f6f7090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f70a240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56032f6f7090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f70a240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56032f6f7090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1561b018-6fdf-4e5d-94af-e3c267a92376
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160696447686, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160696447976, "job": 1, "event": "recovery_finished"}
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: freelist init
Nov 26 12:38:16 compute-0 ceph-osd[88362]: freelist _read_cfg
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bluefs umount
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bdev(0x56033053d400 /var/lib/ceph/osd/ceph-0/block) close
Nov 26 12:38:16 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:16 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:16 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 26 12:38:16 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:38:16 compute-0 ceph-mon[74966]: Deploying daemon osd.1 on compute-0
Nov 26 12:38:16 compute-0 ceph-mon[74966]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bdev(0x56033053d400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bdev(0x56033053d400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bdev(0x56033053d400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bdev(0x56033053d400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bluefs mount
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bluefs mount shared_bdev_used = 4718592
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: RocksDB version: 7.9.2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Git sha 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: DB SUMMARY
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: DB Session ID:  OP18G8N8BK0JDZ3FFAWA
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: CURRENT file:  CURRENT
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                         Options.error_if_exists: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.create_if_missing: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                                     Options.env: 0x56032f85f8f0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                                Options.info_log: 0x5603305096c0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                              Options.statistics: (nil)
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                               Options.use_fsync: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                              Options.db_log_dir: 
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                                 Options.wal_dir: db.wal
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.write_buffer_manager: 0x5603306166e0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.unordered_write: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                               Options.row_cache: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                              Options.wal_filter: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.two_write_queues: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.wal_compression: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.atomic_flush: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.max_background_jobs: 4
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.max_background_compactions: -1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.max_subcompactions: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.max_open_files: -1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Compression algorithms supported:
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         kZSTD supported: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         kXpressCompression supported: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         kBZip2Compression supported: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         kLZ4Compression supported: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         kZlibCompression supported: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         kLZ4HCCompression supported: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         kSnappyCompression supported: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f701060)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56032f6f71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f701060)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56032f6f71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f701060)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56032f6f71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f701060)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56032f6f71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f701060)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56032f6f71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f701060)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56032f6f71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56032f701060)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56032f6f71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560330509460)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56032f6f7090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560330509460)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56032f6f7090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560330509460)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56032f6f7090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1561b018-6fdf-4e5d-94af-e3c267a92376
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160696727818, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160696730517, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160696, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1561b018-6fdf-4e5d-94af-e3c267a92376", "db_session_id": "OP18G8N8BK0JDZ3FFAWA", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160696731502, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160696, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1561b018-6fdf-4e5d-94af-e3c267a92376", "db_session_id": "OP18G8N8BK0JDZ3FFAWA", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160696732350, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160696, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1561b018-6fdf-4e5d-94af-e3c267a92376", "db_session_id": "OP18G8N8BK0JDZ3FFAWA", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160696732839, "job": 1, "event": "recovery_finished"}
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5603306ddc00
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: DB pointer 0x5603305ffa00
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Nov 26 12:38:16 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 12:38:16 compute-0 ceph-osd[88362]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f7090#2 capacity: 512.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,3.8743e-05%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f7090#2 capacity: 512.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,3.8743e-05%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f7090#2 capacity: 512.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,3.8743e-05%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 26 12:38:16 compute-0 ceph-osd[88362]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 26 12:38:16 compute-0 ceph-osd[88362]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 26 12:38:16 compute-0 ceph-osd[88362]: _get_class not permitted to load lua
Nov 26 12:38:16 compute-0 ceph-osd[88362]: _get_class not permitted to load sdk
Nov 26 12:38:16 compute-0 ceph-osd[88362]: _get_class not permitted to load test_remote_reads
Nov 26 12:38:16 compute-0 ceph-osd[88362]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 26 12:38:16 compute-0 ceph-osd[88362]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 26 12:38:16 compute-0 ceph-osd[88362]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 26 12:38:16 compute-0 ceph-osd[88362]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 26 12:38:16 compute-0 ceph-osd[88362]: osd.0 0 load_pgs
Nov 26 12:38:16 compute-0 ceph-osd[88362]: osd.0 0 load_pgs opened 0 pgs
Nov 26 12:38:16 compute-0 ceph-osd[88362]: osd.0 0 log_to_monitors true
Nov 26 12:38:16 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0[88358]: 2025-11-26T12:38:16.751+0000 7fe03c03f740 -1 osd.0 0 log_to_monitors true
Nov 26 12:38:16 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Nov 26 12:38:16 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1795865798,v1:192.168.122.100:6803/1795865798]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 26 12:38:16 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate-test[88578]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 26 12:38:16 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate-test[88578]:                             [--no-systemd] [--no-tmpfs]
Nov 26 12:38:16 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate-test[88578]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 26 12:38:16 compute-0 systemd[1]: libpod-28e5fffa5e95aa111baf674d5358bc2ead2950af47f54348f63fa0b427bd2d4a.scope: Deactivated successfully.
Nov 26 12:38:16 compute-0 podman[88565]: 2025-11-26 12:38:16.876866131 +0000 UTC m=+0.650737715 container died 28e5fffa5e95aa111baf674d5358bc2ead2950af47f54348f63fa0b427bd2d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate-test, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:38:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-5df436fe03170fa9941245e25b02eb7f52d93b0dc4d4e155327274b09324c898-merged.mount: Deactivated successfully.
Nov 26 12:38:16 compute-0 podman[88565]: 2025-11-26 12:38:16.90800696 +0000 UTC m=+0.681878544 container remove 28e5fffa5e95aa111baf674d5358bc2ead2950af47f54348f63fa0b427bd2d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate-test, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 12:38:16 compute-0 systemd[1]: libpod-conmon-28e5fffa5e95aa111baf674d5358bc2ead2950af47f54348f63fa0b427bd2d4a.scope: Deactivated successfully.
Nov 26 12:38:17 compute-0 systemd[1]: Reloading.
Nov 26 12:38:17 compute-0 systemd-rc-local-generator[89040]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:38:17 compute-0 systemd-sysv-generator[89043]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:38:17 compute-0 systemd[1]: Reloading.
Nov 26 12:38:17 compute-0 systemd-sysv-generator[89082]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:38:17 compute-0 systemd-rc-local-generator[89079]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:38:17 compute-0 systemd[1]: Starting Ceph osd.1 for f7d7fe93-41e5-51c4-b72d-63b38686102e...
Nov 26 12:38:17 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Nov 26 12:38:17 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 12:38:17 compute-0 ceph-mon[74966]: from='osd.0 [v2:192.168.122.100:6802/1795865798,v1:192.168.122.100:6803/1795865798]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 26 12:38:17 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1795865798,v1:192.168.122.100:6803/1795865798]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 26 12:38:17 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Nov 26 12:38:17 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Nov 26 12:38:17 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 26 12:38:17 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1795865798,v1:192.168.122.100:6803/1795865798]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 26 12:38:17 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 26 12:38:17 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 12:38:17 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 12:38:17 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 12:38:17 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 12:38:17 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 12:38:17 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 12:38:17 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 12:38:17 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 12:38:17 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 12:38:17 compute-0 podman[89137]: 2025-11-26 12:38:17.666892643 +0000 UTC m=+0.028179628 container create f6d158c6c3276fef02e1722d21ca127ccc1a47f8a177e675bb59766e32931aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:38:17 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1721d01acc8a88be545f5cf2a7d41e361564830574e2dac0d477c545cd1ea377/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1721d01acc8a88be545f5cf2a7d41e361564830574e2dac0d477c545cd1ea377/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1721d01acc8a88be545f5cf2a7d41e361564830574e2dac0d477c545cd1ea377/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1721d01acc8a88be545f5cf2a7d41e361564830574e2dac0d477c545cd1ea377/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1721d01acc8a88be545f5cf2a7d41e361564830574e2dac0d477c545cd1ea377/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:17 compute-0 podman[89137]: 2025-11-26 12:38:17.710942947 +0000 UTC m=+0.072229951 container init f6d158c6c3276fef02e1722d21ca127ccc1a47f8a177e675bb59766e32931aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 12:38:17 compute-0 podman[89137]: 2025-11-26 12:38:17.715897981 +0000 UTC m=+0.077184966 container start f6d158c6c3276fef02e1722d21ca127ccc1a47f8a177e675bb59766e32931aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 12:38:17 compute-0 podman[89137]: 2025-11-26 12:38:17.717342514 +0000 UTC m=+0.078629499 container attach f6d158c6c3276fef02e1722d21ca127ccc1a47f8a177e675bb59766e32931aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 12:38:17 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 26 12:38:17 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 26 12:38:17 compute-0 podman[89137]: 2025-11-26 12:38:17.655524284 +0000 UTC m=+0.016811279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:17 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 12:38:18 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate[89149]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 26 12:38:18 compute-0 bash[89137]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 26 12:38:18 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate[89149]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 26 12:38:18 compute-0 bash[89137]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 26 12:38:18 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate[89149]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 26 12:38:18 compute-0 bash[89137]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 26 12:38:18 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate[89149]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 26 12:38:18 compute-0 bash[89137]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 26 12:38:18 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate[89149]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 26 12:38:18 compute-0 bash[89137]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 26 12:38:18 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate[89149]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 26 12:38:18 compute-0 bash[89137]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 26 12:38:18 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate[89149]: --> ceph-volume raw activate successful for osd ID: 1
Nov 26 12:38:18 compute-0 bash[89137]: --> ceph-volume raw activate successful for osd ID: 1
Nov 26 12:38:18 compute-0 systemd[1]: libpod-f6d158c6c3276fef02e1722d21ca127ccc1a47f8a177e675bb59766e32931aba.scope: Deactivated successfully.
Nov 26 12:38:18 compute-0 podman[89137]: 2025-11-26 12:38:18.519315448 +0000 UTC m=+0.880602434 container died f6d158c6c3276fef02e1722d21ca127ccc1a47f8a177e675bb59766e32931aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:38:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-1721d01acc8a88be545f5cf2a7d41e361564830574e2dac0d477c545cd1ea377-merged.mount: Deactivated successfully.
Nov 26 12:38:18 compute-0 podman[89137]: 2025-11-26 12:38:18.552345002 +0000 UTC m=+0.913631986 container remove f6d158c6c3276fef02e1722d21ca127ccc1a47f8a177e675bb59766e32931aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Nov 26 12:38:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Nov 26 12:38:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 12:38:18 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1795865798,v1:192.168.122.100:6803/1795865798]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 26 12:38:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Nov 26 12:38:18 compute-0 ceph-osd[88362]: osd.0 0 done with init, starting boot process
Nov 26 12:38:18 compute-0 ceph-osd[88362]: osd.0 0 start_boot
Nov 26 12:38:18 compute-0 ceph-osd[88362]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 26 12:38:18 compute-0 ceph-osd[88362]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 26 12:38:18 compute-0 ceph-osd[88362]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 26 12:38:18 compute-0 ceph-osd[88362]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 26 12:38:18 compute-0 ceph-osd[88362]: osd.0 0  bench count 12288000 bsize 4 KiB
Nov 26 12:38:18 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Nov 26 12:38:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 12:38:18 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 12:38:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 12:38:18 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 12:38:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 12:38:18 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 12:38:18 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 12:38:18 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 12:38:18 compute-0 ceph-mon[74966]: from='osd.0 [v2:192.168.122.100:6802/1795865798,v1:192.168.122.100:6803/1795865798]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 26 12:38:18 compute-0 ceph-mon[74966]: osdmap e7: 3 total, 0 up, 3 in
Nov 26 12:38:18 compute-0 ceph-mon[74966]: from='osd.0 [v2:192.168.122.100:6802/1795865798,v1:192.168.122.100:6803/1795865798]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 26 12:38:18 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 12:38:18 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 12:38:18 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 12:38:18 compute-0 ceph-mon[74966]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 12:38:18 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 12:38:18 compute-0 ceph-mgr[75236]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1795865798; not ready for session (expect reconnect)
Nov 26 12:38:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 12:38:18 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 12:38:18 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 12:38:18 compute-0 podman[89311]: 2025-11-26 12:38:18.723191045 +0000 UTC m=+0.048375097 container create 7fe95a8b384c5c68314b5460611d9d4e1d6cc687c822707047def746a6bd8d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb013a56b0becbef7b3b36d69426bb0f46bbe5876c097b98e39268709ddf439f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb013a56b0becbef7b3b36d69426bb0f46bbe5876c097b98e39268709ddf439f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb013a56b0becbef7b3b36d69426bb0f46bbe5876c097b98e39268709ddf439f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb013a56b0becbef7b3b36d69426bb0f46bbe5876c097b98e39268709ddf439f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb013a56b0becbef7b3b36d69426bb0f46bbe5876c097b98e39268709ddf439f/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:18 compute-0 podman[89311]: 2025-11-26 12:38:18.692159883 +0000 UTC m=+0.017343956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:18 compute-0 podman[89311]: 2025-11-26 12:38:18.828425029 +0000 UTC m=+0.153609102 container init 7fe95a8b384c5c68314b5460611d9d4e1d6cc687c822707047def746a6bd8d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 12:38:18 compute-0 podman[89311]: 2025-11-26 12:38:18.833386706 +0000 UTC m=+0.158570760 container start 7fe95a8b384c5c68314b5460611d9d4e1d6cc687c822707047def746a6bd8d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Nov 26 12:38:18 compute-0 bash[89311]: 7fe95a8b384c5c68314b5460611d9d4e1d6cc687c822707047def746a6bd8d18
Nov 26 12:38:18 compute-0 systemd[1]: Started Ceph osd.1 for f7d7fe93-41e5-51c4-b72d-63b38686102e.
Nov 26 12:38:18 compute-0 ceph-osd[89328]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 12:38:18 compute-0 ceph-osd[89328]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 26 12:38:18 compute-0 ceph-osd[89328]: pidfile_write: ignore empty --pid-file
Nov 26 12:38:18 compute-0 ceph-osd[89328]: bdev(0x561fc2f8b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 26 12:38:18 compute-0 ceph-osd[89328]: bdev(0x561fc2f8b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 26 12:38:18 compute-0 ceph-osd[89328]: bdev(0x561fc2f8b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 12:38:18 compute-0 ceph-osd[89328]: bdev(0x561fc2f8b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 12:38:18 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 12:38:18 compute-0 ceph-osd[89328]: bdev(0x561fc3dc3800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 26 12:38:18 compute-0 ceph-osd[89328]: bdev(0x561fc3dc3800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 26 12:38:18 compute-0 ceph-osd[89328]: bdev(0x561fc3dc3800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 12:38:18 compute-0 sudo[88450]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:18 compute-0 ceph-osd[89328]: bdev(0x561fc3dc3800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 12:38:18 compute-0 ceph-osd[89328]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 26 12:38:18 compute-0 ceph-osd[89328]: bdev(0x561fc3dc3800 /var/lib/ceph/osd/ceph-1/block) close
Nov 26 12:38:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:38:18 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:38:18 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Nov 26 12:38:18 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 26 12:38:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:38:18 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:38:18 compute-0 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Nov 26 12:38:18 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Nov 26 12:38:18 compute-0 sudo[89341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:18 compute-0 sudo[89341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:18 compute-0 sudo[89341]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:19 compute-0 sudo[89366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:38:19 compute-0 sudo[89366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:19 compute-0 sudo[89366]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:19 compute-0 sudo[89391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:19 compute-0 sudo[89391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:19 compute-0 sudo[89391]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:19 compute-0 sudo[89416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 12:38:19 compute-0 sudo[89416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bdev(0x561fc2f8b800 /var/lib/ceph/osd/ceph-1/block) close
Nov 26 12:38:19 compute-0 podman[89475]: 2025-11-26 12:38:19.371451853 +0000 UTC m=+0.027940725 container create 978bf3dd0f0d2243688334e7689661a257a92d403054524851e764f0d22ac53e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kare, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 26 12:38:19 compute-0 ceph-osd[89328]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 26 12:38:19 compute-0 ceph-osd[89328]: load: jerasure load: lrc 
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bdev(0x561fc3e44c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bdev(0x561fc3e44c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bdev(0x561fc3e44c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bdev(0x561fc3e44c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bdev(0x561fc3e44c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 26 12:38:19 compute-0 systemd[1]: Started libpod-conmon-978bf3dd0f0d2243688334e7689661a257a92d403054524851e764f0d22ac53e.scope.
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bdev(0x561fc3e44c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bdev(0x561fc3e44c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bdev(0x561fc3e44c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bdev(0x561fc3e44c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bdev(0x561fc3e44c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 26 12:38:19 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:19 compute-0 podman[89475]: 2025-11-26 12:38:19.435379625 +0000 UTC m=+0.091868517 container init 978bf3dd0f0d2243688334e7689661a257a92d403054524851e764f0d22ac53e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kare, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 12:38:19 compute-0 podman[89475]: 2025-11-26 12:38:19.444952277 +0000 UTC m=+0.101441149 container start 978bf3dd0f0d2243688334e7689661a257a92d403054524851e764f0d22ac53e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 12:38:19 compute-0 jolly_kare[89497]: 167 167
Nov 26 12:38:19 compute-0 systemd[1]: libpod-978bf3dd0f0d2243688334e7689661a257a92d403054524851e764f0d22ac53e.scope: Deactivated successfully.
Nov 26 12:38:19 compute-0 podman[89475]: 2025-11-26 12:38:19.450786515 +0000 UTC m=+0.107275407 container attach 978bf3dd0f0d2243688334e7689661a257a92d403054524851e764f0d22ac53e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kare, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:38:19 compute-0 podman[89475]: 2025-11-26 12:38:19.451023854 +0000 UTC m=+0.107512736 container died 978bf3dd0f0d2243688334e7689661a257a92d403054524851e764f0d22ac53e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:38:19 compute-0 podman[89475]: 2025-11-26 12:38:19.361197622 +0000 UTC m=+0.017686513 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c54553e1786bf140c8881262960a447e4d42049897bedb4a49a8062a4980d0e-merged.mount: Deactivated successfully.
Nov 26 12:38:19 compute-0 podman[89475]: 2025-11-26 12:38:19.479944883 +0000 UTC m=+0.136433755 container remove 978bf3dd0f0d2243688334e7689661a257a92d403054524851e764f0d22ac53e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kare, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 12:38:19 compute-0 systemd[1]: libpod-conmon-978bf3dd0f0d2243688334e7689661a257a92d403054524851e764f0d22ac53e.scope: Deactivated successfully.
Nov 26 12:38:19 compute-0 ceph-mgr[75236]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1795865798; not ready for session (expect reconnect)
Nov 26 12:38:19 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 12:38:19 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 12:38:19 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 12:38:19 compute-0 ceph-mon[74966]: from='osd.0 [v2:192.168.122.100:6802/1795865798,v1:192.168.122.100:6803/1795865798]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 26 12:38:19 compute-0 ceph-mon[74966]: osdmap e8: 3 total, 0 up, 3 in
Nov 26 12:38:19 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 12:38:19 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 12:38:19 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 12:38:19 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 12:38:19 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:19 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:19 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 26 12:38:19 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:38:19 compute-0 ceph-mon[74966]: Deploying daemon osd.2 on compute-0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 26 12:38:19 compute-0 ceph-osd[89328]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bdev(0x561fc3e44c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bdev(0x561fc3e44c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bdev(0x561fc3e44c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bdev(0x561fc3e44c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bdev(0x561fc3e45400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bdev(0x561fc3e45400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bdev(0x561fc3e45400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bdev(0x561fc3e45400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bluefs mount
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bluefs mount shared_bdev_used = 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: RocksDB version: 7.9.2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Git sha 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: DB SUMMARY
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: DB Session ID:  L4FCZBK85MEUFPLLH3BU
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: CURRENT file:  CURRENT
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                         Options.error_if_exists: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.create_if_missing: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                                     Options.env: 0x561fc3e15c70
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                                Options.info_log: 0x561fc30128a0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                              Options.statistics: (nil)
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                               Options.use_fsync: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                              Options.db_log_dir: 
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                                 Options.wal_dir: db.wal
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.write_buffer_manager: 0x561fc3f1e460
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.unordered_write: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                               Options.row_cache: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                              Options.wal_filter: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.two_write_queues: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.wal_compression: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.atomic_flush: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.max_background_jobs: 4
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.max_background_compactions: -1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.max_subcompactions: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.max_open_files: -1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Compression algorithms supported:
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         kZSTD supported: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         kXpressCompression supported: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         kBZip2Compression supported: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         kLZ4Compression supported: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         kZlibCompression supported: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         kLZ4HCCompression supported: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         kSnappyCompression supported: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc30122c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561fc2fff1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc30122c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561fc2fff1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc30122c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561fc2fff1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc30122c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561fc2fff1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc30122c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561fc2fff1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc30122c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561fc2fff1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc30122c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561fc2fff1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc3012240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561fc2fff090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc3012240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561fc2fff090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc3012240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561fc2fff090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 93dfa10c-7ad9-4a11-b11e-e56de0349760
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160699692120, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160699692294, "job": 1, "event": "recovery_finished"}
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 26 12:38:19 compute-0 podman[89527]: 2025-11-26 12:38:19.692692094 +0000 UTC m=+0.040977701 container create e346d03118efaa5b01ec9717d854b43b52bf90b1bd3c52988e6f31962ceea768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 12:38:19 compute-0 ceph-osd[89328]: freelist init
Nov 26 12:38:19 compute-0 ceph-osd[89328]: freelist _read_cfg
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bluefs umount
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bdev(0x561fc3e45400 /var/lib/ceph/osd/ceph-1/block) close
Nov 26 12:38:19 compute-0 systemd[1]: Started libpod-conmon-e346d03118efaa5b01ec9717d854b43b52bf90b1bd3c52988e6f31962ceea768.scope.
Nov 26 12:38:19 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6162960d2532a1676d6a549216960ccf415656970f46f9aa47b1555004990cee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6162960d2532a1676d6a549216960ccf415656970f46f9aa47b1555004990cee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6162960d2532a1676d6a549216960ccf415656970f46f9aa47b1555004990cee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6162960d2532a1676d6a549216960ccf415656970f46f9aa47b1555004990cee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6162960d2532a1676d6a549216960ccf415656970f46f9aa47b1555004990cee/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:19 compute-0 podman[89527]: 2025-11-26 12:38:19.756531249 +0000 UTC m=+0.104816866 container init e346d03118efaa5b01ec9717d854b43b52bf90b1bd3c52988e6f31962ceea768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:38:19 compute-0 podman[89527]: 2025-11-26 12:38:19.762040813 +0000 UTC m=+0.110326430 container start e346d03118efaa5b01ec9717d854b43b52bf90b1bd3c52988e6f31962ceea768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:38:19 compute-0 podman[89527]: 2025-11-26 12:38:19.767790942 +0000 UTC m=+0.116076560 container attach e346d03118efaa5b01ec9717d854b43b52bf90b1bd3c52988e6f31962ceea768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 12:38:19 compute-0 podman[89527]: 2025-11-26 12:38:19.675704183 +0000 UTC m=+0.023989819 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:19 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bdev(0x561fc3e45400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bdev(0x561fc3e45400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bdev(0x561fc3e45400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bdev(0x561fc3e45400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bluefs mount
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bluefs mount shared_bdev_used = 4718592
Nov 26 12:38:19 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: RocksDB version: 7.9.2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Git sha 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: DB SUMMARY
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: DB Session ID:  L4FCZBK85MEUFPLLH3BV
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: CURRENT file:  CURRENT
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                         Options.error_if_exists: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.create_if_missing: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                                     Options.env: 0x561fc3fc6460
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                                Options.info_log: 0x561fc3012620
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                              Options.statistics: (nil)
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                               Options.use_fsync: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                              Options.db_log_dir: 
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                                 Options.wal_dir: db.wal
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.write_buffer_manager: 0x561fc3f1e460
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.unordered_write: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                               Options.row_cache: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                              Options.wal_filter: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.two_write_queues: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.wal_compression: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.atomic_flush: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.max_background_jobs: 4
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.max_background_compactions: -1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.max_subcompactions: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.max_open_files: -1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Compression algorithms supported:
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         kZSTD supported: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         kXpressCompression supported: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         kBZip2Compression supported: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         kLZ4Compression supported: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         kZlibCompression supported: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         kLZ4HCCompression supported: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         kSnappyCompression supported: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc3012a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561fc2fff1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc3012a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561fc2fff1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc3012a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561fc2fff1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc3012a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561fc2fff1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc3012a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561fc2fff1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc3012a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561fc2fff1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc3012a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561fc2fff1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc3012380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561fc2fff090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc3012380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561fc2fff090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561fc3012380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561fc2fff090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 93dfa10c-7ad9-4a11-b11e-e56de0349760
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160699994970, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 26 12:38:19 compute-0 ceph-osd[89328]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 26 12:38:20 compute-0 ceph-osd[89328]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160700037414, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160699, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93dfa10c-7ad9-4a11-b11e-e56de0349760", "db_session_id": "L4FCZBK85MEUFPLLH3BV", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 26 12:38:20 compute-0 ceph-osd[89328]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160700038882, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160700, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93dfa10c-7ad9-4a11-b11e-e56de0349760", "db_session_id": "L4FCZBK85MEUFPLLH3BV", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 26 12:38:20 compute-0 ceph-osd[89328]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160700043408, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160700, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93dfa10c-7ad9-4a11-b11e-e56de0349760", "db_session_id": "L4FCZBK85MEUFPLLH3BV", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 26 12:38:20 compute-0 ceph-osd[89328]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160700044084, "job": 1, "event": "recovery_finished"}
Nov 26 12:38:20 compute-0 ceph-osd[89328]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 26 12:38:20 compute-0 ceph-osd[89328]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x561fc316c000
Nov 26 12:38:20 compute-0 ceph-osd[89328]: rocksdb: DB pointer 0x561fc3f07a00
Nov 26 12:38:20 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 26 12:38:20 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 26 12:38:20 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 26 12:38:20 compute-0 ceph-osd[89328]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 26 12:38:20 compute-0 ceph-osd[89328]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 26 12:38:20 compute-0 ceph-osd[89328]: _get_class not permitted to load lua
Nov 26 12:38:20 compute-0 ceph-osd[89328]: _get_class not permitted to load sdk
Nov 26 12:38:20 compute-0 ceph-osd[89328]: _get_class not permitted to load test_remote_reads
Nov 26 12:38:20 compute-0 ceph-osd[89328]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 26 12:38:20 compute-0 ceph-osd[89328]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 26 12:38:20 compute-0 ceph-osd[89328]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 26 12:38:20 compute-0 ceph-osd[89328]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 26 12:38:20 compute-0 ceph-osd[89328]: osd.1 0 load_pgs
Nov 26 12:38:20 compute-0 ceph-osd[89328]: osd.1 0 load_pgs opened 0 pgs
Nov 26 12:38:20 compute-0 ceph-osd[89328]: osd.1 0 log_to_monitors true
Nov 26 12:38:20 compute-0 ceph-osd[89328]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 12:38:20 compute-0 ceph-osd[89328]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.042       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.042       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.042       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.042       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff090#2 capacity: 512.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,3.8743e-05%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff090#2 capacity: 512.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,3.8743e-05%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff090#2 capacity: 512.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,3.8743e-05%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 26 12:38:20 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1[89324]: 2025-11-26T12:38:20.070+0000 7f9eb500d740 -1 osd.1 0 log_to_monitors true
Nov 26 12:38:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Nov 26 12:38:20 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/980381060,v1:192.168.122.100:6807/980381060]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 26 12:38:20 compute-0 ceph-osd[88362]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 76.582 iops: 19604.878 elapsed_sec: 0.153
Nov 26 12:38:20 compute-0 ceph-osd[88362]: log_channel(cluster) log [WRN] : OSD bench result of 19604.877803 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 26 12:38:20 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0[88358]: 2025-11-26T12:38:20.288+0000 7fe037fbf640 -1 osd.0 0 waiting for initial osdmap
Nov 26 12:38:20 compute-0 ceph-osd[88362]: osd.0 0 waiting for initial osdmap
Nov 26 12:38:20 compute-0 ceph-osd[88362]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 26 12:38:20 compute-0 ceph-osd[88362]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 26 12:38:20 compute-0 ceph-osd[88362]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 26 12:38:20 compute-0 ceph-osd[88362]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Nov 26 12:38:20 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-0[88358]: 2025-11-26T12:38:20.301+0000 7fe0335e7640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 26 12:38:20 compute-0 ceph-osd[88362]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 26 12:38:20 compute-0 ceph-osd[88362]: osd.0 8 set_numa_affinity not setting numa affinity
Nov 26 12:38:20 compute-0 ceph-osd[88362]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 26 12:38:20 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate-test[89735]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 26 12:38:20 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate-test[89735]:                             [--no-systemd] [--no-tmpfs]
Nov 26 12:38:20 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate-test[89735]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 26 12:38:20 compute-0 podman[89527]: 2025-11-26 12:38:20.324166313 +0000 UTC m=+0.672451940 container died e346d03118efaa5b01ec9717d854b43b52bf90b1bd3c52988e6f31962ceea768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:38:20 compute-0 systemd[1]: libpod-e346d03118efaa5b01ec9717d854b43b52bf90b1bd3c52988e6f31962ceea768.scope: Deactivated successfully.
Nov 26 12:38:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-6162960d2532a1676d6a549216960ccf415656970f46f9aa47b1555004990cee-merged.mount: Deactivated successfully.
Nov 26 12:38:20 compute-0 podman[89527]: 2025-11-26 12:38:20.35863639 +0000 UTC m=+0.706922007 container remove e346d03118efaa5b01ec9717d854b43b52bf90b1bd3c52988e6f31962ceea768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate-test, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:38:20 compute-0 systemd[1]: libpod-conmon-e346d03118efaa5b01ec9717d854b43b52bf90b1bd3c52988e6f31962ceea768.scope: Deactivated successfully.
Nov 26 12:38:20 compute-0 systemd[1]: Reloading.
Nov 26 12:38:20 compute-0 systemd-sysv-generator[90012]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:38:20 compute-0 systemd-rc-local-generator[90009]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:38:20 compute-0 ceph-mgr[75236]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1795865798; not ready for session (expect reconnect)
Nov 26 12:38:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 12:38:20 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 12:38:20 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 12:38:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Nov 26 12:38:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 12:38:20 compute-0 ceph-mon[74966]: purged_snaps scrub starts
Nov 26 12:38:20 compute-0 ceph-mon[74966]: purged_snaps scrub ok
Nov 26 12:38:20 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 12:38:20 compute-0 ceph-mon[74966]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 12:38:20 compute-0 ceph-mon[74966]: from='osd.1 [v2:192.168.122.100:6806/980381060,v1:192.168.122.100:6807/980381060]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 26 12:38:20 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 12:38:20 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/980381060,v1:192.168.122.100:6807/980381060]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 26 12:38:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Nov 26 12:38:20 compute-0 ceph-mon[74966]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/1795865798,v1:192.168.122.100:6803/1795865798] boot
Nov 26 12:38:20 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Nov 26 12:38:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 26 12:38:20 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/980381060,v1:192.168.122.100:6807/980381060]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 26 12:38:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 26 12:38:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 12:38:20 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 12:38:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 12:38:20 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 12:38:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 12:38:20 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 12:38:20 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 12:38:20 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 12:38:20 compute-0 ceph-osd[88362]: osd.0 9 state: booting -> active
Nov 26 12:38:20 compute-0 systemd[1]: Reloading.
Nov 26 12:38:20 compute-0 systemd-sysv-generator[90047]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:38:20 compute-0 systemd-rc-local-generator[90044]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:38:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:38:20 compute-0 systemd[1]: Starting Ceph osd.2 for f7d7fe93-41e5-51c4-b72d-63b38686102e...
Nov 26 12:38:21 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 26 12:38:21 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 26 12:38:21 compute-0 podman[90099]: 2025-11-26 12:38:21.123591707 +0000 UTC m=+0.027424749 container create 75a2026209097baee10bdb84dd454a393aa8cdc3408d582a3a44c49528b2c5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:38:21 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7200fb2fbea5ecccb0a33f872e2aad9936bcf9333567edaf4729ca80d239e0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7200fb2fbea5ecccb0a33f872e2aad9936bcf9333567edaf4729ca80d239e0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7200fb2fbea5ecccb0a33f872e2aad9936bcf9333567edaf4729ca80d239e0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7200fb2fbea5ecccb0a33f872e2aad9936bcf9333567edaf4729ca80d239e0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7200fb2fbea5ecccb0a33f872e2aad9936bcf9333567edaf4729ca80d239e0a/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:21 compute-0 podman[90099]: 2025-11-26 12:38:21.16610389 +0000 UTC m=+0.069936932 container init 75a2026209097baee10bdb84dd454a393aa8cdc3408d582a3a44c49528b2c5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 12:38:21 compute-0 podman[90099]: 2025-11-26 12:38:21.171562849 +0000 UTC m=+0.075395892 container start 75a2026209097baee10bdb84dd454a393aa8cdc3408d582a3a44c49528b2c5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 12:38:21 compute-0 podman[90099]: 2025-11-26 12:38:21.172698207 +0000 UTC m=+0.076531249 container attach 75a2026209097baee10bdb84dd454a393aa8cdc3408d582a3a44c49528b2c5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:38:21 compute-0 podman[90099]: 2025-11-26 12:38:21.112971674 +0000 UTC m=+0.016804736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:21 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Nov 26 12:38:21 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 12:38:21 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/980381060,v1:192.168.122.100:6807/980381060]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 26 12:38:21 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Nov 26 12:38:21 compute-0 ceph-osd[89328]: osd.1 0 done with init, starting boot process
Nov 26 12:38:21 compute-0 ceph-osd[89328]: osd.1 0 start_boot
Nov 26 12:38:21 compute-0 ceph-osd[89328]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 26 12:38:21 compute-0 ceph-osd[89328]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 26 12:38:21 compute-0 ceph-osd[89328]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 26 12:38:21 compute-0 ceph-osd[89328]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 26 12:38:21 compute-0 ceph-osd[89328]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 26 12:38:21 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Nov 26 12:38:21 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 12:38:21 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 12:38:21 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 12:38:21 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 12:38:21 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 12:38:21 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 12:38:21 compute-0 ceph-mon[74966]: OSD bench result of 19604.877803 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 26 12:38:21 compute-0 ceph-mon[74966]: from='osd.1 [v2:192.168.122.100:6806/980381060,v1:192.168.122.100:6807/980381060]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 26 12:38:21 compute-0 ceph-mon[74966]: osd.0 [v2:192.168.122.100:6802/1795865798,v1:192.168.122.100:6803/1795865798] boot
Nov 26 12:38:21 compute-0 ceph-mon[74966]: osdmap e9: 3 total, 1 up, 3 in
Nov 26 12:38:21 compute-0 ceph-mon[74966]: from='osd.1 [v2:192.168.122.100:6806/980381060,v1:192.168.122.100:6807/980381060]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 26 12:38:21 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 12:38:21 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 12:38:21 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 12:38:21 compute-0 ceph-mgr[75236]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/980381060; not ready for session (expect reconnect)
Nov 26 12:38:21 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 12:38:21 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 12:38:21 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 12:38:21 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Nov 26 12:38:21 compute-0 ceph-mgr[75236]: [devicehealth INFO root] creating mgr pool
Nov 26 12:38:21 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Nov 26 12:38:21 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 26 12:38:21 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate[90112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 26 12:38:21 compute-0 bash[90099]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 26 12:38:21 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate[90112]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 26 12:38:21 compute-0 bash[90099]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 26 12:38:21 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate[90112]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 26 12:38:21 compute-0 bash[90099]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 26 12:38:21 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate[90112]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 26 12:38:21 compute-0 bash[90099]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 26 12:38:21 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate[90112]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 26 12:38:21 compute-0 bash[90099]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 26 12:38:21 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate[90112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 26 12:38:21 compute-0 bash[90099]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 26 12:38:21 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate[90112]: --> ceph-volume raw activate successful for osd ID: 2
Nov 26 12:38:21 compute-0 bash[90099]: --> ceph-volume raw activate successful for osd ID: 2
Nov 26 12:38:21 compute-0 systemd[1]: libpod-75a2026209097baee10bdb84dd454a393aa8cdc3408d582a3a44c49528b2c5dd.scope: Deactivated successfully.
Nov 26 12:38:22 compute-0 podman[90231]: 2025-11-26 12:38:22.012373083 +0000 UTC m=+0.016738710 container died 75a2026209097baee10bdb84dd454a393aa8cdc3408d582a3a44c49528b2c5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 26 12:38:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7200fb2fbea5ecccb0a33f872e2aad9936bcf9333567edaf4729ca80d239e0a-merged.mount: Deactivated successfully.
Nov 26 12:38:22 compute-0 podman[90231]: 2025-11-26 12:38:22.078795556 +0000 UTC m=+0.083161163 container remove 75a2026209097baee10bdb84dd454a393aa8cdc3408d582a3a44c49528b2c5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2-activate, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:38:22 compute-0 podman[90281]: 2025-11-26 12:38:22.236380699 +0000 UTC m=+0.042166451 container create fad0efe7fb69756136726f3de93d8285c0c8e63a4f5cbbb541e21a1d047a6c06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2accbd43f6caf537dd8fd5db6997d5544747513a5253a97148cf4129e5a10239/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2accbd43f6caf537dd8fd5db6997d5544747513a5253a97148cf4129e5a10239/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2accbd43f6caf537dd8fd5db6997d5544747513a5253a97148cf4129e5a10239/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2accbd43f6caf537dd8fd5db6997d5544747513a5253a97148cf4129e5a10239/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2accbd43f6caf537dd8fd5db6997d5544747513a5253a97148cf4129e5a10239/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:22 compute-0 podman[90281]: 2025-11-26 12:38:22.208337191 +0000 UTC m=+0.014122962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:22 compute-0 podman[90281]: 2025-11-26 12:38:22.348835055 +0000 UTC m=+0.154620806 container init fad0efe7fb69756136726f3de93d8285c0c8e63a4f5cbbb541e21a1d047a6c06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 12:38:22 compute-0 podman[90281]: 2025-11-26 12:38:22.353574211 +0000 UTC m=+0.159359962 container start fad0efe7fb69756136726f3de93d8285c0c8e63a4f5cbbb541e21a1d047a6c06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:38:22 compute-0 bash[90281]: fad0efe7fb69756136726f3de93d8285c0c8e63a4f5cbbb541e21a1d047a6c06
Nov 26 12:38:22 compute-0 systemd[1]: Started Ceph osd.2 for f7d7fe93-41e5-51c4-b72d-63b38686102e.
Nov 26 12:38:22 compute-0 sudo[89416]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:22 compute-0 ceph-osd[90297]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 12:38:22 compute-0 ceph-osd[90297]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 26 12:38:22 compute-0 ceph-osd[90297]: pidfile_write: ignore empty --pid-file
Nov 26 12:38:22 compute-0 ceph-osd[90297]: bdev(0x5640ef93d800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 26 12:38:22 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:38:22 compute-0 ceph-osd[90297]: bdev(0x5640ef93d800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 26 12:38:22 compute-0 ceph-osd[90297]: bdev(0x5640ef93d800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 12:38:22 compute-0 ceph-osd[90297]: bdev(0x5640ef93d800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 12:38:22 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 12:38:22 compute-0 ceph-osd[90297]: bdev(0x5640f0775000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 26 12:38:22 compute-0 ceph-osd[90297]: bdev(0x5640f0775000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 26 12:38:22 compute-0 ceph-osd[90297]: bdev(0x5640f0775000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 12:38:22 compute-0 ceph-osd[90297]: bdev(0x5640f0775000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 12:38:22 compute-0 ceph-osd[90297]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 26 12:38:22 compute-0 ceph-osd[90297]: bdev(0x5640f0775000 /var/lib/ceph/osd/ceph-2/block) close
Nov 26 12:38:22 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:22 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:38:22 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:22 compute-0 sudo[90310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:22 compute-0 sudo[90310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:22 compute-0 sudo[90310]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:22 compute-0 sudo[90335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:38:22 compute-0 sudo[90335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:22 compute-0 sudo[90335]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:22 compute-0 sudo[90360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:22 compute-0 sudo[90360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:22 compute-0 sudo[90360]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:22 compute-0 sudo[90385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- raw list --format json
Nov 26 12:38:22 compute-0 sudo[90385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:22 compute-0 ceph-mgr[75236]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/980381060; not ready for session (expect reconnect)
Nov 26 12:38:22 compute-0 ceph-osd[90297]: bdev(0x5640ef93d800 /var/lib/ceph/osd/ceph-2/block) close
Nov 26 12:38:22 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Nov 26 12:38:22 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 12:38:22 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 12:38:22 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 12:38:22 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 12:38:22 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 26 12:38:22 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Nov 26 12:38:22 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Nov 26 12:38:22 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 26 12:38:22 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 26 12:38:22 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 26 12:38:22 compute-0 ceph-mon[74966]: from='osd.1 [v2:192.168.122.100:6806/980381060,v1:192.168.122.100:6807/980381060]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 26 12:38:22 compute-0 ceph-mon[74966]: osdmap e10: 3 total, 1 up, 3 in
Nov 26 12:38:22 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 12:38:22 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 12:38:22 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 12:38:22 compute-0 ceph-mon[74966]: pgmap v24: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Nov 26 12:38:22 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 26 12:38:22 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:22 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:22 compute-0 ceph-osd[88362]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 26 12:38:22 compute-0 ceph-osd[88362]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 26 12:38:22 compute-0 ceph-osd[88362]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 26 12:38:22 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Nov 26 12:38:22 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 12:38:22 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 12:38:22 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 12:38:22 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 12:38:22 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 12:38:22 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 12:38:22 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Nov 26 12:38:22 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 26 12:38:22 compute-0 podman[90444]: 2025-11-26 12:38:22.865968418 +0000 UTC m=+0.028786636 container create ce37fa9acefe881033484d2040591e5384a5648b505ba8a67b14ff8e9dce09cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_black, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:38:22 compute-0 systemd[1]: Started libpod-conmon-ce37fa9acefe881033484d2040591e5384a5648b505ba8a67b14ff8e9dce09cc.scope.
Nov 26 12:38:22 compute-0 ceph-osd[90297]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Nov 26 12:38:22 compute-0 ceph-osd[90297]: load: jerasure load: lrc 
Nov 26 12:38:22 compute-0 ceph-osd[90297]: bdev(0x5640f0775c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 26 12:38:22 compute-0 ceph-osd[90297]: bdev(0x5640f0775c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 26 12:38:22 compute-0 ceph-osd[90297]: bdev(0x5640f0775c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 12:38:22 compute-0 ceph-osd[90297]: bdev(0x5640f0775c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 12:38:22 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 12:38:22 compute-0 ceph-osd[90297]: bdev(0x5640f0775c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 26 12:38:22 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:22 compute-0 podman[90444]: 2025-11-26 12:38:22.929941466 +0000 UTC m=+0.092759704 container init ce37fa9acefe881033484d2040591e5384a5648b505ba8a67b14ff8e9dce09cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:38:22 compute-0 podman[90444]: 2025-11-26 12:38:22.934565274 +0000 UTC m=+0.097383492 container start ce37fa9acefe881033484d2040591e5384a5648b505ba8a67b14ff8e9dce09cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_black, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 12:38:22 compute-0 clever_black[90459]: 167 167
Nov 26 12:38:22 compute-0 podman[90444]: 2025-11-26 12:38:22.93743296 +0000 UTC m=+0.100251177 container attach ce37fa9acefe881033484d2040591e5384a5648b505ba8a67b14ff8e9dce09cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_black, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:38:22 compute-0 systemd[1]: libpod-ce37fa9acefe881033484d2040591e5384a5648b505ba8a67b14ff8e9dce09cc.scope: Deactivated successfully.
Nov 26 12:38:22 compute-0 podman[90444]: 2025-11-26 12:38:22.938929241 +0000 UTC m=+0.101747458 container died ce37fa9acefe881033484d2040591e5384a5648b505ba8a67b14ff8e9dce09cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_black, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:38:22 compute-0 podman[90444]: 2025-11-26 12:38:22.854023909 +0000 UTC m=+0.016842147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-67311d0ef53154ce01a3063bf6181de6d1bf40a966f0c6d0a6bdb395d83fddc3-merged.mount: Deactivated successfully.
Nov 26 12:38:22 compute-0 podman[90444]: 2025-11-26 12:38:22.96338876 +0000 UTC m=+0.126206976 container remove ce37fa9acefe881033484d2040591e5384a5648b505ba8a67b14ff8e9dce09cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:38:22 compute-0 systemd[1]: libpod-conmon-ce37fa9acefe881033484d2040591e5384a5648b505ba8a67b14ff8e9dce09cc.scope: Deactivated successfully.
Nov 26 12:38:23 compute-0 podman[90485]: 2025-11-26 12:38:23.084966175 +0000 UTC m=+0.034296580 container create 21d3d07b2bfc45bb1bff278da7d8ee761c63f06af397ee02bf2774f1c9e9117e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swanson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 12:38:23 compute-0 systemd[1]: Started libpod-conmon-21d3d07b2bfc45bb1bff278da7d8ee761c63f06af397ee02bf2774f1c9e9117e.scope.
Nov 26 12:38:23 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e469978bccff69efe54f4463b4b900209419e8689cf89017551f9148c27b3479/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e469978bccff69efe54f4463b4b900209419e8689cf89017551f9148c27b3479/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e469978bccff69efe54f4463b4b900209419e8689cf89017551f9148c27b3479/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e469978bccff69efe54f4463b4b900209419e8689cf89017551f9148c27b3479/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:23 compute-0 podman[90485]: 2025-11-26 12:38:23.136255726 +0000 UTC m=+0.085586131 container init 21d3d07b2bfc45bb1bff278da7d8ee761c63f06af397ee02bf2774f1c9e9117e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:38:23 compute-0 podman[90485]: 2025-11-26 12:38:23.141523582 +0000 UTC m=+0.090853988 container start 21d3d07b2bfc45bb1bff278da7d8ee761c63f06af397ee02bf2774f1c9e9117e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:38:23 compute-0 podman[90485]: 2025-11-26 12:38:23.142610629 +0000 UTC m=+0.091941034 container attach 21d3d07b2bfc45bb1bff278da7d8ee761c63f06af397ee02bf2774f1c9e9117e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:38:23 compute-0 podman[90485]: 2025-11-26 12:38:23.072399259 +0000 UTC m=+0.021729684 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bdev(0x5640f0775c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bdev(0x5640f0775c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bdev(0x5640f0775c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bdev(0x5640f0775c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bdev(0x5640f0775c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 26 12:38:23 compute-0 ceph-osd[89328]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 97.496 iops: 24958.887 elapsed_sec: 0.120
Nov 26 12:38:23 compute-0 ceph-osd[89328]: log_channel(cluster) log [WRN] : OSD bench result of 24958.887305 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 26 12:38:23 compute-0 ceph-osd[89328]: osd.1 0 waiting for initial osdmap
Nov 26 12:38:23 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1[89324]: 2025-11-26T12:38:23.371+0000 7f9eb0f8d640 -1 osd.1 0 waiting for initial osdmap
Nov 26 12:38:23 compute-0 ceph-osd[89328]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 26 12:38:23 compute-0 ceph-osd[89328]: osd.1 11 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 26 12:38:23 compute-0 ceph-osd[89328]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 26 12:38:23 compute-0 ceph-osd[89328]: osd.1 11 check_osdmap_features require_osd_release unknown -> reef
Nov 26 12:38:23 compute-0 ceph-osd[89328]: osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 26 12:38:23 compute-0 ceph-osd[89328]: osd.1 11 set_numa_affinity not setting numa affinity
Nov 26 12:38:23 compute-0 ceph-osd[89328]: osd.1 11 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Nov 26 12:38:23 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-1[89324]: 2025-11-26T12:38:23.394+0000 7f9eac5b5640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 26 12:38:23 compute-0 ceph-osd[90297]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 26 12:38:23 compute-0 ceph-osd[90297]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bdev(0x5640f0775c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bdev(0x5640f0775c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bdev(0x5640f0775c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bdev(0x5640f0775c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bdev(0x5640f0958400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bdev(0x5640f0958400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bdev(0x5640f0958400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bdev(0x5640f0958400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bluefs mount
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bluefs mount shared_bdev_used = 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: RocksDB version: 7.9.2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Git sha 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: DB SUMMARY
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: DB Session ID:  ZFP68MW27DJPUF7WJ9PW
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: CURRENT file:  CURRENT
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                         Options.error_if_exists: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.create_if_missing: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                                     Options.env: 0x5640f07c7c70
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                                Options.info_log: 0x5640ef9c4800
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                              Options.statistics: (nil)
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                               Options.use_fsync: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                              Options.db_log_dir: 
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                                 Options.wal_dir: db.wal
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.write_buffer_manager: 0x5640f08d2460
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.unordered_write: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                               Options.row_cache: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                              Options.wal_filter: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.two_write_queues: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.wal_compression: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.atomic_flush: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.max_background_jobs: 4
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.max_background_compactions: -1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.max_subcompactions: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.max_open_files: -1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Compression algorithms supported:
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         kZSTD supported: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         kXpressCompression supported: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         kBZip2Compression supported: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         kLZ4Compression supported: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         kZlibCompression supported: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         kLZ4HCCompression supported: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         kSnappyCompression supported: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9c4260)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640ef9b11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9c4260)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640ef9b11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9c4260)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640ef9b11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9c4260)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640ef9b11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9c4260)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640ef9b11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9c4260)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640ef9b11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9c4260)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640ef9b11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9c4200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640ef9b1090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9c4200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640ef9b1090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9c4200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640ef9b1090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d949341c-8934-42e1-848d-1fe9b1f3749e
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160703476890, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160703477035, "job": 1, "event": "recovery_finished"}
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: freelist init
Nov 26 12:38:23 compute-0 ceph-osd[90297]: freelist _read_cfg
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bluefs umount
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bdev(0x5640f0958400 /var/lib/ceph/osd/ceph-2/block) close
Nov 26 12:38:23 compute-0 ceph-mgr[75236]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/980381060; not ready for session (expect reconnect)
Nov 26 12:38:23 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 12:38:23 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 12:38:23 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 12:38:23 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Nov 26 12:38:23 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 26 12:38:23 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e12 e12: 3 total, 2 up, 3 in
Nov 26 12:38:23 compute-0 ceph-mon[74966]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/980381060,v1:192.168.122.100:6807/980381060] boot
Nov 26 12:38:23 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 2 up, 3 in
Nov 26 12:38:23 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 12:38:23 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 12:38:23 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 12:38:23 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 12:38:23 compute-0 ceph-mon[74966]: purged_snaps scrub starts
Nov 26 12:38:23 compute-0 ceph-mon[74966]: purged_snaps scrub ok
Nov 26 12:38:23 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 12:38:23 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 26 12:38:23 compute-0 ceph-mon[74966]: osdmap e11: 3 total, 1 up, 3 in
Nov 26 12:38:23 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 12:38:23 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 12:38:23 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 26 12:38:23 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 12:38:23 compute-0 ceph-osd[89328]: osd.1 12 state: booting -> active
Nov 26 12:38:23 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[11,12)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:23 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bdev(0x5640f0958400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bdev(0x5640f0958400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bdev(0x5640f0958400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bdev(0x5640f0958400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bluefs mount
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bluefs mount shared_bdev_used = 4718592
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: RocksDB version: 7.9.2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Git sha 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: DB SUMMARY
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: DB Session ID:  ZFP68MW27DJPUF7WJ9PX
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: CURRENT file:  CURRENT
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                         Options.error_if_exists: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.create_if_missing: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                                     Options.env: 0x5640f0978310
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                                Options.info_log: 0x5640efc8afc0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                              Options.statistics: (nil)
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                               Options.use_fsync: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                              Options.db_log_dir: 
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                                 Options.wal_dir: db.wal
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.write_buffer_manager: 0x5640f08d26e0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.unordered_write: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                               Options.row_cache: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                              Options.wal_filter: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.two_write_queues: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.wal_compression: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.atomic_flush: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.max_background_jobs: 4
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.max_background_compactions: -1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.max_subcompactions: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.max_open_files: -1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Compression algorithms supported:
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         kZSTD supported: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         kXpressCompression supported: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         kBZip2Compression supported: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         kLZ4Compression supported: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         kZlibCompression supported: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         kLZ4HCCompression supported: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         kSnappyCompression supported: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9baf80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640ef9b11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9baf80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640ef9b11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9baf80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640ef9b11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9baf80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640ef9b11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9baf80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640ef9b11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9baf80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640ef9b11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640ef9baf80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640ef9b11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640f07c3c20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640ef9b1090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640f07c3c20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640ef9b1090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:           Options.merge_operator: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640f07c3c20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640ef9b1090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.compression: LZ4
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.num_levels: 7
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.bloom_locality: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                               Options.ttl: 2592000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                       Options.enable_blob_files: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                           Options.min_blob_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d949341c-8934-42e1-848d-1fe9b1f3749e
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160703774469, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160703776843, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160703, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d949341c-8934-42e1-848d-1fe9b1f3749e", "db_session_id": "ZFP68MW27DJPUF7WJ9PX", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160703777826, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160703, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d949341c-8934-42e1-848d-1fe9b1f3749e", "db_session_id": "ZFP68MW27DJPUF7WJ9PX", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160703778579, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160703, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d949341c-8934-42e1-848d-1fe9b1f3749e", "db_session_id": "ZFP68MW27DJPUF7WJ9PX", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764160703779052, "job": 1, "event": "recovery_finished"}
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5640efb1fc00
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: DB pointer 0x5640f08bba00
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Nov 26 12:38:23 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 12:38:23 compute-0 ceph-osd[90297]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b1090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b1090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b1090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 26 12:38:23 compute-0 ceph-osd[90297]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 26 12:38:23 compute-0 ceph-osd[90297]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 26 12:38:23 compute-0 ceph-osd[90297]: _get_class not permitted to load lua
Nov 26 12:38:23 compute-0 ceph-osd[90297]: _get_class not permitted to load sdk
Nov 26 12:38:23 compute-0 ceph-osd[90297]: _get_class not permitted to load test_remote_reads
Nov 26 12:38:23 compute-0 ceph-osd[90297]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 26 12:38:23 compute-0 ceph-osd[90297]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 26 12:38:23 compute-0 ceph-osd[90297]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 26 12:38:23 compute-0 ceph-osd[90297]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 26 12:38:23 compute-0 ceph-osd[90297]: osd.2 0 load_pgs
Nov 26 12:38:23 compute-0 ceph-osd[90297]: osd.2 0 load_pgs opened 0 pgs
Nov 26 12:38:23 compute-0 ceph-osd[90297]: osd.2 0 log_to_monitors true
Nov 26 12:38:23 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2[90293]: 2025-11-26T12:38:23.791+0000 7f7871f80740 -1 osd.2 0 log_to_monitors true
Nov 26 12:38:23 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Nov 26 12:38:23 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/4170874453,v1:192.168.122.100:6811/4170874453]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 26 12:38:23 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v27: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Nov 26 12:38:23 compute-0 goofy_swanson[90499]: {
Nov 26 12:38:23 compute-0 goofy_swanson[90499]:     "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 12:38:23 compute-0 goofy_swanson[90499]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:38:23 compute-0 goofy_swanson[90499]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 12:38:23 compute-0 goofy_swanson[90499]:         "osd_id": 1,
Nov 26 12:38:23 compute-0 goofy_swanson[90499]:         "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:38:23 compute-0 goofy_swanson[90499]:         "type": "bluestore"
Nov 26 12:38:23 compute-0 goofy_swanson[90499]:     },
Nov 26 12:38:23 compute-0 goofy_swanson[90499]:     "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 12:38:23 compute-0 goofy_swanson[90499]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:38:23 compute-0 goofy_swanson[90499]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 12:38:23 compute-0 goofy_swanson[90499]:         "osd_id": 2,
Nov 26 12:38:23 compute-0 goofy_swanson[90499]:         "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:38:23 compute-0 goofy_swanson[90499]:         "type": "bluestore"
Nov 26 12:38:23 compute-0 goofy_swanson[90499]:     },
Nov 26 12:38:23 compute-0 goofy_swanson[90499]:     "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 12:38:23 compute-0 goofy_swanson[90499]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:38:23 compute-0 goofy_swanson[90499]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 12:38:23 compute-0 goofy_swanson[90499]:         "osd_id": 0,
Nov 26 12:38:23 compute-0 goofy_swanson[90499]:         "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:38:23 compute-0 goofy_swanson[90499]:         "type": "bluestore"
Nov 26 12:38:23 compute-0 goofy_swanson[90499]:     }
Nov 26 12:38:23 compute-0 goofy_swanson[90499]: }
Nov 26 12:38:23 compute-0 systemd[1]: libpod-21d3d07b2bfc45bb1bff278da7d8ee761c63f06af397ee02bf2774f1c9e9117e.scope: Deactivated successfully.
Nov 26 12:38:23 compute-0 podman[90948]: 2025-11-26 12:38:23.959711457 +0000 UTC m=+0.020069884 container died 21d3d07b2bfc45bb1bff278da7d8ee761c63f06af397ee02bf2774f1c9e9117e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swanson, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 12:38:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-e469978bccff69efe54f4463b4b900209419e8689cf89017551f9148c27b3479-merged.mount: Deactivated successfully.
Nov 26 12:38:23 compute-0 podman[90948]: 2025-11-26 12:38:23.990455434 +0000 UTC m=+0.050813852 container remove 21d3d07b2bfc45bb1bff278da7d8ee761c63f06af397ee02bf2774f1c9e9117e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:38:23 compute-0 systemd[1]: libpod-conmon-21d3d07b2bfc45bb1bff278da7d8ee761c63f06af397ee02bf2774f1c9e9117e.scope: Deactivated successfully.
Nov 26 12:38:24 compute-0 sudo[90385]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:24 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:38:24 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:24 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:38:24 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:24 compute-0 sudo[90960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:24 compute-0 sudo[90960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:24 compute-0 sudo[90960]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:24 compute-0 sudo[90985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 12:38:24 compute-0 sudo[90985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:24 compute-0 sudo[90985]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:24 compute-0 sudo[91010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:24 compute-0 sudo[91010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:24 compute-0 sudo[91010]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:24 compute-0 sudo[91035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:38:24 compute-0 sudo[91035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:24 compute-0 sudo[91035]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:24 compute-0 sudo[91060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:24 compute-0 sudo[91060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:24 compute-0 sudo[91060]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:24 compute-0 sudo[91085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 26 12:38:24 compute-0 sudo[91085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:24 compute-0 podman[91167]: 2025-11-26 12:38:24.586790709 +0000 UTC m=+0.036596111 container exec ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:38:24 compute-0 podman[91167]: 2025-11-26 12:38:24.664950658 +0000 UTC m=+0.114756060 container exec_died ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:38:24 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Nov 26 12:38:24 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/4170874453,v1:192.168.122.100:6811/4170874453]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 26 12:38:24 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Nov 26 12:38:24 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Nov 26 12:38:24 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 12:38:24 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 12:38:24 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 12:38:24 compute-0 ceph-mon[74966]: OSD bench result of 24958.887305 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 26 12:38:24 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 26 12:38:24 compute-0 ceph-mon[74966]: osd.1 [v2:192.168.122.100:6806/980381060,v1:192.168.122.100:6807/980381060] boot
Nov 26 12:38:24 compute-0 ceph-mon[74966]: osdmap e12: 3 total, 2 up, 3 in
Nov 26 12:38:24 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 12:38:24 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 12:38:24 compute-0 ceph-mon[74966]: from='osd.2 [v2:192.168.122.100:6810/4170874453,v1:192.168.122.100:6811/4170874453]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 26 12:38:24 compute-0 ceph-mon[74966]: pgmap v27: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Nov 26 12:38:24 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:24 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:24 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=12/13 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[11,12)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:24 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 26 12:38:24 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/4170874453,v1:192.168.122.100:6811/4170874453]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 26 12:38:24 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e13 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 26 12:38:24 compute-0 ceph-mgr[75236]: [devicehealth INFO root] creating main.db for devicehealth
Nov 26 12:38:24 compute-0 ceph-mgr[75236]: [devicehealth INFO root] Check health
Nov 26 12:38:24 compute-0 ceph-mgr[75236]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Nov 26 12:38:24 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 26 12:38:24 compute-0 sudo[91218]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Nov 26 12:38:24 compute-0 sudo[91218]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 26 12:38:24 compute-0 sudo[91218]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Nov 26 12:38:24 compute-0 sudo[91218]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:24 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 26 12:38:24 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 26 12:38:24 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 26 12:38:24 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 26 12:38:24 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 26 12:38:24 compute-0 sudo[91085]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:24 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:38:24 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:24 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:38:24 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:25 compute-0 sudo[91280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:25 compute-0 sudo[91280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:25 compute-0 sudo[91280]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:25 compute-0 sudo[91305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:38:25 compute-0 sudo[91305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:25 compute-0 sudo[91305]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:25 compute-0 sudo[91330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:25 compute-0 sudo[91330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:25 compute-0 sudo[91330]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:25 compute-0 sudo[91355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- inventory --format=json-pretty --filter-for-batch
Nov 26 12:38:25 compute-0 sudo[91355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:25 compute-0 podman[91411]: 2025-11-26 12:38:25.362946927 +0000 UTC m=+0.028206178 container create be794cf3bec1086b1d28783a41e0846a4fdbe6ffa1cb31d1f1479dc0a83f2fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chebyshev, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 12:38:25 compute-0 systemd[1]: Started libpod-conmon-be794cf3bec1086b1d28783a41e0846a4fdbe6ffa1cb31d1f1479dc0a83f2fd3.scope.
Nov 26 12:38:25 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:25 compute-0 podman[91411]: 2025-11-26 12:38:25.406565945 +0000 UTC m=+0.071825206 container init be794cf3bec1086b1d28783a41e0846a4fdbe6ffa1cb31d1f1479dc0a83f2fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chebyshev, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 12:38:25 compute-0 podman[91411]: 2025-11-26 12:38:25.411476617 +0000 UTC m=+0.076735867 container start be794cf3bec1086b1d28783a41e0846a4fdbe6ffa1cb31d1f1479dc0a83f2fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chebyshev, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:38:25 compute-0 podman[91411]: 2025-11-26 12:38:25.412702916 +0000 UTC m=+0.077962167 container attach be794cf3bec1086b1d28783a41e0846a4fdbe6ffa1cb31d1f1479dc0a83f2fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chebyshev, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 12:38:25 compute-0 jovial_chebyshev[91424]: 167 167
Nov 26 12:38:25 compute-0 systemd[1]: libpod-be794cf3bec1086b1d28783a41e0846a4fdbe6ffa1cb31d1f1479dc0a83f2fd3.scope: Deactivated successfully.
Nov 26 12:38:25 compute-0 podman[91411]: 2025-11-26 12:38:25.415522481 +0000 UTC m=+0.080781732 container died be794cf3bec1086b1d28783a41e0846a4fdbe6ffa1cb31d1f1479dc0a83f2fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chebyshev, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:38:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc6ed5cf83c0f8fcfe4cefbb82d75ef9db79bf723d474c46634aebbb4651b2cb-merged.mount: Deactivated successfully.
Nov 26 12:38:25 compute-0 podman[91411]: 2025-11-26 12:38:25.434505077 +0000 UTC m=+0.099764328 container remove be794cf3bec1086b1d28783a41e0846a4fdbe6ffa1cb31d1f1479dc0a83f2fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 12:38:25 compute-0 podman[91411]: 2025-11-26 12:38:25.351090113 +0000 UTC m=+0.016349355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:25 compute-0 systemd[1]: libpod-conmon-be794cf3bec1086b1d28783a41e0846a4fdbe6ffa1cb31d1f1479dc0a83f2fd3.scope: Deactivated successfully.
Nov 26 12:38:25 compute-0 podman[91446]: 2025-11-26 12:38:25.548429774 +0000 UTC m=+0.031484339 container create 335c28c3ce37bceca369b72f449ec4adf9959be24d5ad19117dc92f33cdddbde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 12:38:25 compute-0 systemd[1]: Started libpod-conmon-335c28c3ce37bceca369b72f449ec4adf9959be24d5ad19117dc92f33cdddbde.scope.
Nov 26 12:38:25 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b397b4d563d8868fb7d42d1210a51eb2efc3d709a32b9625765181fc5d905785/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b397b4d563d8868fb7d42d1210a51eb2efc3d709a32b9625765181fc5d905785/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b397b4d563d8868fb7d42d1210a51eb2efc3d709a32b9625765181fc5d905785/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b397b4d563d8868fb7d42d1210a51eb2efc3d709a32b9625765181fc5d905785/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:25 compute-0 podman[91446]: 2025-11-26 12:38:25.596826851 +0000 UTC m=+0.079881427 container init 335c28c3ce37bceca369b72f449ec4adf9959be24d5ad19117dc92f33cdddbde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pike, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 12:38:25 compute-0 podman[91446]: 2025-11-26 12:38:25.602450412 +0000 UTC m=+0.085504978 container start 335c28c3ce37bceca369b72f449ec4adf9959be24d5ad19117dc92f33cdddbde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 12:38:25 compute-0 podman[91446]: 2025-11-26 12:38:25.603789365 +0000 UTC m=+0.086843931 container attach 335c28c3ce37bceca369b72f449ec4adf9959be24d5ad19117dc92f33cdddbde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pike, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 12:38:25 compute-0 podman[91446]: 2025-11-26 12:38:25.534478397 +0000 UTC m=+0.017532974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:25 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Nov 26 12:38:25 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/4170874453,v1:192.168.122.100:6811/4170874453]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 26 12:38:25 compute-0 ceph-osd[90297]: osd.2 0 done with init, starting boot process
Nov 26 12:38:25 compute-0 ceph-osd[90297]: osd.2 0 start_boot
Nov 26 12:38:25 compute-0 ceph-osd[90297]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 26 12:38:25 compute-0 ceph-osd[90297]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 26 12:38:25 compute-0 ceph-osd[90297]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 26 12:38:25 compute-0 ceph-osd[90297]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 26 12:38:25 compute-0 ceph-osd[90297]: osd.2 0  bench count 12288000 bsize 4 KiB
Nov 26 12:38:25 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Nov 26 12:38:25 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Nov 26 12:38:25 compute-0 ceph-mon[74966]: from='osd.2 [v2:192.168.122.100:6810/4170874453,v1:192.168.122.100:6811/4170874453]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 26 12:38:25 compute-0 ceph-mon[74966]: osdmap e13: 3 total, 2 up, 3 in
Nov 26 12:38:25 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 12:38:25 compute-0 ceph-mon[74966]: from='osd.2 [v2:192.168.122.100:6810/4170874453,v1:192.168.122.100:6811/4170874453]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 26 12:38:25 compute-0 ceph-mon[74966]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 26 12:38:25 compute-0 ceph-mon[74966]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 26 12:38:25 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 26 12:38:25 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:25 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:25 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 12:38:25 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 12:38:25 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 12:38:25 compute-0 ceph-mgr[75236]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4170874453; not ready for session (expect reconnect)
Nov 26 12:38:25 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 12:38:25 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 12:38:25 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 12:38:25 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v30: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 26 12:38:25 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:38:26 compute-0 jolly_pike[91459]: [
Nov 26 12:38:26 compute-0 jolly_pike[91459]:     {
Nov 26 12:38:26 compute-0 jolly_pike[91459]:         "available": false,
Nov 26 12:38:26 compute-0 jolly_pike[91459]:         "ceph_device": false,
Nov 26 12:38:26 compute-0 jolly_pike[91459]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 26 12:38:26 compute-0 jolly_pike[91459]:         "lsm_data": {},
Nov 26 12:38:26 compute-0 jolly_pike[91459]:         "lvs": [],
Nov 26 12:38:26 compute-0 jolly_pike[91459]:         "path": "/dev/sr0",
Nov 26 12:38:26 compute-0 jolly_pike[91459]:         "rejected_reasons": [
Nov 26 12:38:26 compute-0 jolly_pike[91459]:             "Has a FileSystem",
Nov 26 12:38:26 compute-0 jolly_pike[91459]:             "Insufficient space (<5GB)"
Nov 26 12:38:26 compute-0 jolly_pike[91459]:         ],
Nov 26 12:38:26 compute-0 jolly_pike[91459]:         "sys_api": {
Nov 26 12:38:26 compute-0 jolly_pike[91459]:             "actuators": null,
Nov 26 12:38:26 compute-0 jolly_pike[91459]:             "device_nodes": "sr0",
Nov 26 12:38:26 compute-0 jolly_pike[91459]:             "devname": "sr0",
Nov 26 12:38:26 compute-0 jolly_pike[91459]:             "human_readable_size": "474.00 KB",
Nov 26 12:38:26 compute-0 jolly_pike[91459]:             "id_bus": "ata",
Nov 26 12:38:26 compute-0 jolly_pike[91459]:             "model": "QEMU DVD-ROM",
Nov 26 12:38:26 compute-0 jolly_pike[91459]:             "nr_requests": "64",
Nov 26 12:38:26 compute-0 jolly_pike[91459]:             "parent": "/dev/sr0",
Nov 26 12:38:26 compute-0 jolly_pike[91459]:             "partitions": {},
Nov 26 12:38:26 compute-0 jolly_pike[91459]:             "path": "/dev/sr0",
Nov 26 12:38:26 compute-0 jolly_pike[91459]:             "removable": "1",
Nov 26 12:38:26 compute-0 jolly_pike[91459]:             "rev": "2.5+",
Nov 26 12:38:26 compute-0 jolly_pike[91459]:             "ro": "0",
Nov 26 12:38:26 compute-0 jolly_pike[91459]:             "rotational": "1",
Nov 26 12:38:26 compute-0 jolly_pike[91459]:             "sas_address": "",
Nov 26 12:38:26 compute-0 jolly_pike[91459]:             "sas_device_handle": "",
Nov 26 12:38:26 compute-0 jolly_pike[91459]:             "scheduler_mode": "mq-deadline",
Nov 26 12:38:26 compute-0 jolly_pike[91459]:             "sectors": 0,
Nov 26 12:38:26 compute-0 jolly_pike[91459]:             "sectorsize": "2048",
Nov 26 12:38:26 compute-0 jolly_pike[91459]:             "size": 485376.0,
Nov 26 12:38:26 compute-0 jolly_pike[91459]:             "support_discard": "2048",
Nov 26 12:38:26 compute-0 jolly_pike[91459]:             "type": "disk",
Nov 26 12:38:26 compute-0 jolly_pike[91459]:             "vendor": "QEMU"
Nov 26 12:38:26 compute-0 jolly_pike[91459]:         }
Nov 26 12:38:26 compute-0 jolly_pike[91459]:     }
Nov 26 12:38:26 compute-0 jolly_pike[91459]: ]
Nov 26 12:38:26 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.whkbdn(active, since 50s)
Nov 26 12:38:26 compute-0 ceph-mgr[75236]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4170874453; not ready for session (expect reconnect)
Nov 26 12:38:26 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 12:38:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 12:38:26 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 12:38:26 compute-0 ceph-mon[74966]: from='osd.2 [v2:192.168.122.100:6810/4170874453,v1:192.168.122.100:6811/4170874453]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 26 12:38:26 compute-0 ceph-mon[74966]: osdmap e14: 3 total, 2 up, 3 in
Nov 26 12:38:26 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 12:38:26 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 12:38:26 compute-0 ceph-mon[74966]: pgmap v30: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 26 12:38:26 compute-0 systemd[1]: libpod-335c28c3ce37bceca369b72f449ec4adf9959be24d5ad19117dc92f33cdddbde.scope: Deactivated successfully.
Nov 26 12:38:26 compute-0 systemd[1]: libpod-335c28c3ce37bceca369b72f449ec4adf9959be24d5ad19117dc92f33cdddbde.scope: Consumed 1.105s CPU time.
Nov 26 12:38:26 compute-0 conmon[91459]: conmon 335c28c3ce37bceca369 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-335c28c3ce37bceca369b72f449ec4adf9959be24d5ad19117dc92f33cdddbde.scope/container/memory.events
Nov 26 12:38:26 compute-0 podman[91446]: 2025-11-26 12:38:26.697820678 +0000 UTC m=+1.180875254 container died 335c28c3ce37bceca369b72f449ec4adf9959be24d5ad19117dc92f33cdddbde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pike, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 12:38:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-b397b4d563d8868fb7d42d1210a51eb2efc3d709a32b9625765181fc5d905785-merged.mount: Deactivated successfully.
Nov 26 12:38:26 compute-0 podman[91446]: 2025-11-26 12:38:26.738445901 +0000 UTC m=+1.221500477 container remove 335c28c3ce37bceca369b72f449ec4adf9959be24d5ad19117dc92f33cdddbde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:38:26 compute-0 systemd[1]: libpod-conmon-335c28c3ce37bceca369b72f449ec4adf9959be24d5ad19117dc92f33cdddbde.scope: Deactivated successfully.
Nov 26 12:38:26 compute-0 sudo[91355]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:38:26 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:38:26 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Nov 26 12:38:26 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 26 12:38:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Nov 26 12:38:26 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 26 12:38:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Nov 26 12:38:26 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 26 12:38:26 compute-0 ceph-mgr[75236]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43934k
Nov 26 12:38:26 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43934k
Nov 26 12:38:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 26 12:38:26 compute-0 ceph-mgr[75236]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44988689: error parsing value: Value '44988689' is below minimum 939524096
Nov 26 12:38:26 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44988689: error parsing value: Value '44988689' is below minimum 939524096
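[editor's note] The two cephadm lines above show the memory autotuner computing a per-OSD target of 44988689 bytes (about 43 MiB, matching the "43934k" figure it logged) and the monitor rejecting it because it is below the hard minimum of 939524096 bytes (896 MiB), so the osd_memory_target values that were just removed stay at their defaults. A minimal sketch of how this is commonly handled on small test hosts like this one, assuming the osd_memory_target_autotune option available in recent cephadm-managed releases (verify the option name and minimum against your Ceph version; the values below are placeholders, not measurements from this log):

    ceph config set osd osd_memory_target_autotune false    # stop cephadm from recomputing the per-OSD target
    ceph config set osd osd_memory_target 939524096         # pin the target at the allowed minimum (896 MiB)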
Nov 26 12:38:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:38:26 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:38:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 12:38:26 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:38:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 12:38:26 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:26 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 5fb42466-2929-4349-b9f1-e136a8fb6ead does not exist
Nov 26 12:38:26 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 1b21a0fb-440b-4d28-844a-7a9083e2eb9e does not exist
Nov 26 12:38:26 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev be3f3deb-1d51-4596-a12b-5a8135e5c86d does not exist
Nov 26 12:38:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 12:38:26 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:38:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 12:38:26 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:38:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:38:26 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:38:26 compute-0 sudo[93297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:26 compute-0 sudo[93297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:26 compute-0 sudo[93297]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:26 compute-0 sudo[93322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:38:26 compute-0 sudo[93322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:26 compute-0 sudo[93322]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:26 compute-0 sudo[93347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:26 compute-0 sudo[93347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:26 compute-0 sudo[93347]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:26 compute-0 sudo[93372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 12:38:26 compute-0 sudo[93372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:27 compute-0 podman[93425]: 2025-11-26 12:38:27.225937482 +0000 UTC m=+0.030535424 container create d7fc4f39bcb6732ac9132831ecf020278da37323ef7f267dc6c7df4e2ad5ce6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 12:38:27 compute-0 systemd[1]: Started libpod-conmon-d7fc4f39bcb6732ac9132831ecf020278da37323ef7f267dc6c7df4e2ad5ce6f.scope.
Nov 26 12:38:27 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:27 compute-0 podman[93425]: 2025-11-26 12:38:27.283297909 +0000 UTC m=+0.087895870 container init d7fc4f39bcb6732ac9132831ecf020278da37323ef7f267dc6c7df4e2ad5ce6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:38:27 compute-0 podman[93425]: 2025-11-26 12:38:27.289257214 +0000 UTC m=+0.093855156 container start d7fc4f39bcb6732ac9132831ecf020278da37323ef7f267dc6c7df4e2ad5ce6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 12:38:27 compute-0 stupefied_wozniak[93438]: 167 167
Nov 26 12:38:27 compute-0 systemd[1]: libpod-d7fc4f39bcb6732ac9132831ecf020278da37323ef7f267dc6c7df4e2ad5ce6f.scope: Deactivated successfully.
Nov 26 12:38:27 compute-0 podman[93425]: 2025-11-26 12:38:27.295122121 +0000 UTC m=+0.099720073 container attach d7fc4f39bcb6732ac9132831ecf020278da37323ef7f267dc6c7df4e2ad5ce6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wozniak, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 12:38:27 compute-0 podman[93425]: 2025-11-26 12:38:27.295294628 +0000 UTC m=+0.099892569 container died d7fc4f39bcb6732ac9132831ecf020278da37323ef7f267dc6c7df4e2ad5ce6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wozniak, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 26 12:38:27 compute-0 podman[93425]: 2025-11-26 12:38:27.215934357 +0000 UTC m=+0.020532318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-3610f11a5a79c4a90e871ffe5bd7654395e732ff8e5d853f09ca320713581b9e-merged.mount: Deactivated successfully.
Nov 26 12:38:27 compute-0 podman[93425]: 2025-11-26 12:38:27.319580235 +0000 UTC m=+0.124178178 container remove d7fc4f39bcb6732ac9132831ecf020278da37323ef7f267dc6c7df4e2ad5ce6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 12:38:27 compute-0 systemd[1]: libpod-conmon-d7fc4f39bcb6732ac9132831ecf020278da37323ef7f267dc6c7df4e2ad5ce6f.scope: Deactivated successfully.
Nov 26 12:38:27 compute-0 ceph-osd[90297]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 92.999 iops: 23807.863 elapsed_sec: 0.126
Nov 26 12:38:27 compute-0 ceph-osd[90297]: log_channel(cluster) log [WRN] : OSD bench result of 23807.862739 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 26 12:38:27 compute-0 ceph-osd[90297]: osd.2 0 waiting for initial osdmap
Nov 26 12:38:27 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2[90293]: 2025-11-26T12:38:27.416+0000 7f786df00640 -1 osd.2 0 waiting for initial osdmap
Nov 26 12:38:27 compute-0 ceph-osd[90297]: osd.2 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 26 12:38:27 compute-0 ceph-osd[90297]: osd.2 14 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 26 12:38:27 compute-0 ceph-osd[90297]: osd.2 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 26 12:38:27 compute-0 ceph-osd[90297]: osd.2 14 check_osdmap_features require_osd_release unknown -> reef
Nov 26 12:38:27 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-osd-2[90293]: 2025-11-26T12:38:27.430+0000 7f7869528640 -1 osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 26 12:38:27 compute-0 ceph-osd[90297]: osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 26 12:38:27 compute-0 ceph-osd[90297]: osd.2 14 set_numa_affinity not setting numa affinity
Nov 26 12:38:27 compute-0 podman[93460]: 2025-11-26 12:38:27.433537625 +0000 UTC m=+0.030606258 container create 7cbacc41df212f8721f22f77ba34a0d2fad1185c5e59f6e52402a88496c42b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williams, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:38:27 compute-0 ceph-osd[90297]: osd.2 14 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Nov 26 12:38:27 compute-0 systemd[1]: Started libpod-conmon-7cbacc41df212f8721f22f77ba34a0d2fad1185c5e59f6e52402a88496c42b8d.scope.
Nov 26 12:38:27 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26197d58d3c0dd76a98d701fa89fc2549cb4c9602a5da4b2950120bf048d18fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26197d58d3c0dd76a98d701fa89fc2549cb4c9602a5da4b2950120bf048d18fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26197d58d3c0dd76a98d701fa89fc2549cb4c9602a5da4b2950120bf048d18fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26197d58d3c0dd76a98d701fa89fc2549cb4c9602a5da4b2950120bf048d18fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26197d58d3c0dd76a98d701fa89fc2549cb4c9602a5da4b2950120bf048d18fd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:27 compute-0 podman[93460]: 2025-11-26 12:38:27.48625724 +0000 UTC m=+0.083325884 container init 7cbacc41df212f8721f22f77ba34a0d2fad1185c5e59f6e52402a88496c42b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williams, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 12:38:27 compute-0 podman[93460]: 2025-11-26 12:38:27.492973799 +0000 UTC m=+0.090042443 container start 7cbacc41df212f8721f22f77ba34a0d2fad1185c5e59f6e52402a88496c42b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williams, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:38:27 compute-0 podman[93460]: 2025-11-26 12:38:27.496220601 +0000 UTC m=+0.093289236 container attach 7cbacc41df212f8721f22f77ba34a0d2fad1185c5e59f6e52402a88496c42b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 12:38:27 compute-0 podman[93460]: 2025-11-26 12:38:27.421892662 +0000 UTC m=+0.018961317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:27 compute-0 ceph-mgr[75236]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4170874453; not ready for session (expect reconnect)
Nov 26 12:38:27 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 12:38:27 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 12:38:27 compute-0 ceph-mgr[75236]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 12:38:27 compute-0 ceph-mon[74966]: purged_snaps scrub starts
Nov 26 12:38:27 compute-0 ceph-mon[74966]: purged_snaps scrub ok
Nov 26 12:38:27 compute-0 ceph-mon[74966]: mgrmap e9: compute-0.whkbdn(active, since 50s)
Nov 26 12:38:27 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 12:38:27 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:27 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:27 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 26 12:38:27 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 26 12:38:27 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 26 12:38:27 compute-0 ceph-mon[74966]: Adjusting osd_memory_target on compute-0 to 43934k
Nov 26 12:38:27 compute-0 ceph-mon[74966]: Unable to set osd_memory_target on compute-0 to 44988689: error parsing value: Value '44988689' is below minimum 939524096
Nov 26 12:38:27 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:38:27 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:38:27 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:27 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:38:27 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:38:27 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:38:27 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 12:38:27 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Nov 26 12:38:27 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e15 e15: 3 total, 3 up, 3 in
Nov 26 12:38:27 compute-0 ceph-mon[74966]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/4170874453,v1:192.168.122.100:6811/4170874453] boot
Nov 26 12:38:27 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 3 up, 3 in
Nov 26 12:38:27 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 12:38:27 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 12:38:27 compute-0 ceph-osd[90297]: osd.2 15 state: booting -> active
Nov 26 12:38:27 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v32: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 26 12:38:28 compute-0 elated_williams[93475]: --> passed data devices: 0 physical, 3 LVM
Nov 26 12:38:28 compute-0 elated_williams[93475]: --> relative data size: 1.0
Nov 26 12:38:28 compute-0 elated_williams[93475]: --> All data devices are unavailable
Nov 26 12:38:28 compute-0 systemd[1]: libpod-7cbacc41df212f8721f22f77ba34a0d2fad1185c5e59f6e52402a88496c42b8d.scope: Deactivated successfully.
Nov 26 12:38:28 compute-0 podman[93460]: 2025-11-26 12:38:28.308120598 +0000 UTC m=+0.905189242 container died 7cbacc41df212f8721f22f77ba34a0d2fad1185c5e59f6e52402a88496c42b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 12:38:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-26197d58d3c0dd76a98d701fa89fc2549cb4c9602a5da4b2950120bf048d18fd-merged.mount: Deactivated successfully.
Nov 26 12:38:28 compute-0 podman[93460]: 2025-11-26 12:38:28.337730771 +0000 UTC m=+0.934799405 container remove 7cbacc41df212f8721f22f77ba34a0d2fad1185c5e59f6e52402a88496c42b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williams, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 12:38:28 compute-0 systemd[1]: libpod-conmon-7cbacc41df212f8721f22f77ba34a0d2fad1185c5e59f6e52402a88496c42b8d.scope: Deactivated successfully.
Nov 26 12:38:28 compute-0 sudo[93372]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:28 compute-0 sudo[93514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:28 compute-0 sudo[93514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:28 compute-0 sudo[93514]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:28 compute-0 sudo[93539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:38:28 compute-0 sudo[93539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:28 compute-0 sudo[93539]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:28 compute-0 sudo[93564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:28 compute-0 sudo[93564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:28 compute-0 sudo[93564]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:28 compute-0 sudo[93589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- lvm list --format json
Nov 26 12:38:28 compute-0 sudo[93589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:28 compute-0 ceph-mon[74966]: OSD bench result of 23807.862739 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 26 12:38:28 compute-0 ceph-mon[74966]: osd.2 [v2:192.168.122.100:6810/4170874453,v1:192.168.122.100:6811/4170874453] boot
Nov 26 12:38:28 compute-0 ceph-mon[74966]: osdmap e15: 3 total, 3 up, 3 in
Nov 26 12:38:28 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 12:38:28 compute-0 ceph-mon[74966]: pgmap v32: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 26 12:38:28 compute-0 podman[93645]: 2025-11-26 12:38:28.761460594 +0000 UTC m=+0.037251000 container create a730a96d873ecbdd0dce514b29e6dfabbba8b9b917258cc039bd02e46169c9e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:38:28 compute-0 systemd[1]: Started libpod-conmon-a730a96d873ecbdd0dce514b29e6dfabbba8b9b917258cc039bd02e46169c9e8.scope.
Nov 26 12:38:28 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:28 compute-0 podman[93645]: 2025-11-26 12:38:28.807476196 +0000 UTC m=+0.083266613 container init a730a96d873ecbdd0dce514b29e6dfabbba8b9b917258cc039bd02e46169c9e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mahavira, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:38:28 compute-0 podman[93645]: 2025-11-26 12:38:28.812160788 +0000 UTC m=+0.087951196 container start a730a96d873ecbdd0dce514b29e6dfabbba8b9b917258cc039bd02e46169c9e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 12:38:28 compute-0 podman[93645]: 2025-11-26 12:38:28.813527214 +0000 UTC m=+0.089317622 container attach a730a96d873ecbdd0dce514b29e6dfabbba8b9b917258cc039bd02e46169c9e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mahavira, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:38:28 compute-0 jovial_mahavira[93658]: 167 167
Nov 26 12:38:28 compute-0 systemd[1]: libpod-a730a96d873ecbdd0dce514b29e6dfabbba8b9b917258cc039bd02e46169c9e8.scope: Deactivated successfully.
Nov 26 12:38:28 compute-0 podman[93645]: 2025-11-26 12:38:28.815786599 +0000 UTC m=+0.091577006 container died a730a96d873ecbdd0dce514b29e6dfabbba8b9b917258cc039bd02e46169c9e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mahavira, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 12:38:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a3057ea3ca8ef41a91bd8c054737a4fa7ac7e14ea3c96f31c2a6a3e78a2e006-merged.mount: Deactivated successfully.
Nov 26 12:38:28 compute-0 podman[93645]: 2025-11-26 12:38:28.832147174 +0000 UTC m=+0.107937580 container remove a730a96d873ecbdd0dce514b29e6dfabbba8b9b917258cc039bd02e46169c9e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:38:28 compute-0 podman[93645]: 2025-11-26 12:38:28.75054934 +0000 UTC m=+0.026339757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:28 compute-0 systemd[1]: libpod-conmon-a730a96d873ecbdd0dce514b29e6dfabbba8b9b917258cc039bd02e46169c9e8.scope: Deactivated successfully.
Nov 26 12:38:28 compute-0 podman[93680]: 2025-11-26 12:38:28.94455942 +0000 UTC m=+0.027507314 container create ec03ea21a1c3cfaa1ff85be971617d2008c3c28c43e0843406fc594ffd56d77a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 12:38:28 compute-0 systemd[1]: Started libpod-conmon-ec03ea21a1c3cfaa1ff85be971617d2008c3c28c43e0843406fc594ffd56d77a.scope.
Nov 26 12:38:28 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706cd7cac267c550c09c15045cf05dd92b1ad08eeba34c07aeb1d68c175c7957/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706cd7cac267c550c09c15045cf05dd92b1ad08eeba34c07aeb1d68c175c7957/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706cd7cac267c550c09c15045cf05dd92b1ad08eeba34c07aeb1d68c175c7957/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706cd7cac267c550c09c15045cf05dd92b1ad08eeba34c07aeb1d68c175c7957/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:28 compute-0 podman[93680]: 2025-11-26 12:38:28.994111874 +0000 UTC m=+0.077059779 container init ec03ea21a1c3cfaa1ff85be971617d2008c3c28c43e0843406fc594ffd56d77a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 26 12:38:29 compute-0 podman[93680]: 2025-11-26 12:38:29.000098119 +0000 UTC m=+0.083046014 container start ec03ea21a1c3cfaa1ff85be971617d2008c3c28c43e0843406fc594ffd56d77a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 12:38:29 compute-0 podman[93680]: 2025-11-26 12:38:29.001382029 +0000 UTC m=+0.084329944 container attach ec03ea21a1c3cfaa1ff85be971617d2008c3c28c43e0843406fc594ffd56d77a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:38:29 compute-0 podman[93680]: 2025-11-26 12:38:28.933024344 +0000 UTC m=+0.015972260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]: {
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:     "0": [
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:         {
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "devices": [
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "/dev/loop3"
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             ],
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "lv_name": "ceph_lv0",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "lv_size": "21470642176",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "name": "ceph_lv0",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "tags": {
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.cluster_name": "ceph",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.crush_device_class": "",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.encrypted": "0",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.osd_id": "0",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.type": "block",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.vdo": "0"
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             },
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "type": "block",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "vg_name": "ceph_vg0"
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:         }
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:     ],
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:     "1": [
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:         {
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "devices": [
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "/dev/loop4"
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             ],
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "lv_name": "ceph_lv1",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "lv_size": "21470642176",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "name": "ceph_lv1",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "tags": {
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.cluster_name": "ceph",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.crush_device_class": "",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.encrypted": "0",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.osd_id": "1",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.type": "block",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.vdo": "0"
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             },
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "type": "block",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "vg_name": "ceph_vg1"
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:         }
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:     ],
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:     "2": [
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:         {
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "devices": [
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "/dev/loop5"
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             ],
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "lv_name": "ceph_lv2",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "lv_size": "21470642176",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "name": "ceph_lv2",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "tags": {
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.cluster_name": "ceph",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.crush_device_class": "",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.encrypted": "0",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.osd_id": "2",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.type": "block",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:                 "ceph.vdo": "0"
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             },
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "type": "block",
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:             "vg_name": "ceph_vg2"
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:         }
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]:     ]
Nov 26 12:38:29 compute-0 peaceful_liskov[93693]: }
Nov 26 12:38:29 compute-0 systemd[1]: libpod-ec03ea21a1c3cfaa1ff85be971617d2008c3c28c43e0843406fc594ffd56d77a.scope: Deactivated successfully.
Nov 26 12:38:29 compute-0 conmon[93693]: conmon ec03ea21a1c3cfaa1ff8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ec03ea21a1c3cfaa1ff85be971617d2008c3c28c43e0843406fc594ffd56d77a.scope/container/memory.events
Nov 26 12:38:29 compute-0 podman[93680]: 2025-11-26 12:38:29.637979059 +0000 UTC m=+0.720926953 container died ec03ea21a1c3cfaa1ff85be971617d2008c3c28c43e0843406fc594ffd56d77a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:38:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-706cd7cac267c550c09c15045cf05dd92b1ad08eeba34c07aeb1d68c175c7957-merged.mount: Deactivated successfully.
Nov 26 12:38:29 compute-0 podman[93680]: 2025-11-26 12:38:29.669433942 +0000 UTC m=+0.752381837 container remove ec03ea21a1c3cfaa1ff85be971617d2008c3c28c43e0843406fc594ffd56d77a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:38:29 compute-0 systemd[1]: libpod-conmon-ec03ea21a1c3cfaa1ff85be971617d2008c3c28c43e0843406fc594ffd56d77a.scope: Deactivated successfully.
Nov 26 12:38:29 compute-0 sudo[93589]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:29 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Nov 26 12:38:29 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e16 e16: 3 total, 3 up, 3 in
Nov 26 12:38:29 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 3 up, 3 in
Nov 26 12:38:29 compute-0 sudo[93712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:29 compute-0 sudo[93712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:29 compute-0 sudo[93712]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:29 compute-0 sudo[93737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:38:29 compute-0 sudo[93737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:29 compute-0 sudo[93737]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:29 compute-0 sudo[93762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:29 compute-0 sudo[93762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:29 compute-0 sudo[93762]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:29 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v34: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Nov 26 12:38:29 compute-0 sudo[93787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- raw list --format json
Nov 26 12:38:29 compute-0 sudo[93787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:29 compute-0 sudo[93835]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxneeaezrgitjkzwfrmoyplinhrrepcx ; /usr/bin/python3'
Nov 26 12:38:29 compute-0 sudo[93835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:30 compute-0 python3[93837]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:38:30 compute-0 podman[93868]: 2025-11-26 12:38:30.087022175 +0000 UTC m=+0.028171221 container create c1433874aab6f870a426e379022d4f24eea7c804aa3e1bc38f6240aaf01028f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_keller, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:38:30 compute-0 podman[93876]: 2025-11-26 12:38:30.107238445 +0000 UTC m=+0.032125994 container create fb40e672489f5d671cfc1dfbaff9f6c6e395c63dbde47c175c38439ee73fab5f (image=quay.io/ceph/ceph:v18, name=nifty_hermann, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:38:30 compute-0 systemd[1]: Started libpod-conmon-c1433874aab6f870a426e379022d4f24eea7c804aa3e1bc38f6240aaf01028f2.scope.
Nov 26 12:38:30 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:30 compute-0 systemd[1]: Started libpod-conmon-fb40e672489f5d671cfc1dfbaff9f6c6e395c63dbde47c175c38439ee73fab5f.scope.
Nov 26 12:38:30 compute-0 podman[93868]: 2025-11-26 12:38:30.148867638 +0000 UTC m=+0.090016694 container init c1433874aab6f870a426e379022d4f24eea7c804aa3e1bc38f6240aaf01028f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 12:38:30 compute-0 podman[93868]: 2025-11-26 12:38:30.15335611 +0000 UTC m=+0.094505156 container start c1433874aab6f870a426e379022d4f24eea7c804aa3e1bc38f6240aaf01028f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_keller, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 12:38:30 compute-0 sharp_keller[93892]: 167 167
Nov 26 12:38:30 compute-0 podman[93868]: 2025-11-26 12:38:30.156339285 +0000 UTC m=+0.097488351 container attach c1433874aab6f870a426e379022d4f24eea7c804aa3e1bc38f6240aaf01028f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_keller, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 26 12:38:30 compute-0 podman[93868]: 2025-11-26 12:38:30.15727249 +0000 UTC m=+0.098421586 container died c1433874aab6f870a426e379022d4f24eea7c804aa3e1bc38f6240aaf01028f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_keller, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 12:38:30 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:30 compute-0 systemd[1]: libpod-c1433874aab6f870a426e379022d4f24eea7c804aa3e1bc38f6240aaf01028f2.scope: Deactivated successfully.
Nov 26 12:38:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f4a7c5ecbe6c449ee9569c74e5ff6b3d6823bf0030c57ea62cff236e1310218/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f4a7c5ecbe6c449ee9569c74e5ff6b3d6823bf0030c57ea62cff236e1310218/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f4a7c5ecbe6c449ee9569c74e5ff6b3d6823bf0030c57ea62cff236e1310218/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8a47751961abb2faf8db3e1ac28aa61078b2296bb91cbb17cc390816c1377ec-merged.mount: Deactivated successfully.
Nov 26 12:38:30 compute-0 podman[93868]: 2025-11-26 12:38:30.075643997 +0000 UTC m=+0.016793064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:30 compute-0 podman[93876]: 2025-11-26 12:38:30.173793477 +0000 UTC m=+0.098681046 container init fb40e672489f5d671cfc1dfbaff9f6c6e395c63dbde47c175c38439ee73fab5f (image=quay.io/ceph/ceph:v18, name=nifty_hermann, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:38:30 compute-0 podman[93876]: 2025-11-26 12:38:30.182807462 +0000 UTC m=+0.107695011 container start fb40e672489f5d671cfc1dfbaff9f6c6e395c63dbde47c175c38439ee73fab5f (image=quay.io/ceph/ceph:v18, name=nifty_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 12:38:30 compute-0 podman[93868]: 2025-11-26 12:38:30.184228992 +0000 UTC m=+0.125378038 container remove c1433874aab6f870a426e379022d4f24eea7c804aa3e1bc38f6240aaf01028f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:38:30 compute-0 podman[93876]: 2025-11-26 12:38:30.189477241 +0000 UTC m=+0.114364810 container attach fb40e672489f5d671cfc1dfbaff9f6c6e395c63dbde47c175c38439ee73fab5f (image=quay.io/ceph/ceph:v18, name=nifty_hermann, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:38:30 compute-0 podman[93876]: 2025-11-26 12:38:30.095651191 +0000 UTC m=+0.020538760 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:38:30 compute-0 systemd[1]: libpod-conmon-c1433874aab6f870a426e379022d4f24eea7c804aa3e1bc38f6240aaf01028f2.scope: Deactivated successfully.
Nov 26 12:38:30 compute-0 podman[93920]: 2025-11-26 12:38:30.291566456 +0000 UTC m=+0.026262971 container create 09df7f9035770f131260fa1f987d27638b985ebd6d2ee103b739ed3634ffb242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brown, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:38:30 compute-0 systemd[1]: Started libpod-conmon-09df7f9035770f131260fa1f987d27638b985ebd6d2ee103b739ed3634ffb242.scope.
Nov 26 12:38:30 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e9048fec6d6bee188a7143c831a92eec706e8da86b859787150b77bada1e993/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e9048fec6d6bee188a7143c831a92eec706e8da86b859787150b77bada1e993/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e9048fec6d6bee188a7143c831a92eec706e8da86b859787150b77bada1e993/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e9048fec6d6bee188a7143c831a92eec706e8da86b859787150b77bada1e993/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:30 compute-0 podman[93920]: 2025-11-26 12:38:30.353093286 +0000 UTC m=+0.087789821 container init 09df7f9035770f131260fa1f987d27638b985ebd6d2ee103b739ed3634ffb242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brown, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:38:30 compute-0 podman[93920]: 2025-11-26 12:38:30.357547883 +0000 UTC m=+0.092244399 container start 09df7f9035770f131260fa1f987d27638b985ebd6d2ee103b739ed3634ffb242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:38:30 compute-0 podman[93920]: 2025-11-26 12:38:30.359088729 +0000 UTC m=+0.093785244 container attach 09df7f9035770f131260fa1f987d27638b985ebd6d2ee103b739ed3634ffb242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 12:38:30 compute-0 podman[93920]: 2025-11-26 12:38:30.280511239 +0000 UTC m=+0.015207774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:30 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 26 12:38:30 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3764092662' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 26 12:38:30 compute-0 nifty_hermann[93897]: 
Nov 26 12:38:30 compute-0 nifty_hermann[93897]: {"fsid":"f7d7fe93-41e5-51c4-b72d-63b38686102e","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":94,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":16,"num_osds":3,"num_up_osds":3,"osd_up_since":1764160707,"num_in_osds":3,"osd_in_since":1764160688,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"creating+peering","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":474587136,"bytes_avail":42466697216,"bytes_total":42941284352,"inactive_pgs_ratio":1},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-26T12:36:53.922147+0000","services":{}},"progress_events":{}}
Nov 26 12:38:30 compute-0 systemd[1]: libpod-fb40e672489f5d671cfc1dfbaff9f6c6e395c63dbde47c175c38439ee73fab5f.scope: Deactivated successfully.
Nov 26 12:38:30 compute-0 podman[93876]: 2025-11-26 12:38:30.677164784 +0000 UTC m=+0.602052333 container died fb40e672489f5d671cfc1dfbaff9f6c6e395c63dbde47c175c38439ee73fab5f (image=quay.io/ceph/ceph:v18, name=nifty_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 12:38:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f4a7c5ecbe6c449ee9569c74e5ff6b3d6823bf0030c57ea62cff236e1310218-merged.mount: Deactivated successfully.
Nov 26 12:38:30 compute-0 podman[93876]: 2025-11-26 12:38:30.700692588 +0000 UTC m=+0.625580137 container remove fb40e672489f5d671cfc1dfbaff9f6c6e395c63dbde47c175c38439ee73fab5f (image=quay.io/ceph/ceph:v18, name=nifty_hermann, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:38:30 compute-0 ceph-mon[74966]: osdmap e16: 3 total, 3 up, 3 in
Nov 26 12:38:30 compute-0 ceph-mon[74966]: pgmap v34: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Nov 26 12:38:30 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/3764092662' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 26 12:38:30 compute-0 sudo[93835]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:30 compute-0 systemd[1]: libpod-conmon-fb40e672489f5d671cfc1dfbaff9f6c6e395c63dbde47c175c38439ee73fab5f.scope: Deactivated successfully.
Nov 26 12:38:30 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:38:30 compute-0 sudo[93995]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jofyujzjcnujwajynuoqwfsitsunreis ; /usr/bin/python3'
Nov 26 12:38:30 compute-0 sudo[93995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:31 compute-0 python3[94000]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:38:31 compute-0 podman[94016]: 2025-11-26 12:38:31.100558681 +0000 UTC m=+0.031601111 container create 759e401f3365703fc94d8f6495204699de952d55d6c931f76df77d10ab9d47e2 (image=quay.io/ceph/ceph:v18, name=exciting_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:38:31 compute-0 silly_brown[93934]: {
Nov 26 12:38:31 compute-0 silly_brown[93934]:     "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 12:38:31 compute-0 silly_brown[93934]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:38:31 compute-0 silly_brown[93934]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 12:38:31 compute-0 silly_brown[93934]:         "osd_id": 1,
Nov 26 12:38:31 compute-0 silly_brown[93934]:         "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:38:31 compute-0 silly_brown[93934]:         "type": "bluestore"
Nov 26 12:38:31 compute-0 silly_brown[93934]:     },
Nov 26 12:38:31 compute-0 silly_brown[93934]:     "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 12:38:31 compute-0 silly_brown[93934]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:38:31 compute-0 silly_brown[93934]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 12:38:31 compute-0 silly_brown[93934]:         "osd_id": 2,
Nov 26 12:38:31 compute-0 silly_brown[93934]:         "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:38:31 compute-0 silly_brown[93934]:         "type": "bluestore"
Nov 26 12:38:31 compute-0 silly_brown[93934]:     },
Nov 26 12:38:31 compute-0 silly_brown[93934]:     "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 12:38:31 compute-0 silly_brown[93934]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:38:31 compute-0 silly_brown[93934]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 12:38:31 compute-0 silly_brown[93934]:         "osd_id": 0,
Nov 26 12:38:31 compute-0 silly_brown[93934]:         "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:38:31 compute-0 silly_brown[93934]:         "type": "bluestore"
Nov 26 12:38:31 compute-0 silly_brown[93934]:     }
Nov 26 12:38:31 compute-0 silly_brown[93934]: }
Nov 26 12:38:31 compute-0 systemd[1]: Started libpod-conmon-759e401f3365703fc94d8f6495204699de952d55d6c931f76df77d10ab9d47e2.scope.
Nov 26 12:38:31 compute-0 podman[93920]: 2025-11-26 12:38:31.140313738 +0000 UTC m=+0.875010252 container died 09df7f9035770f131260fa1f987d27638b985ebd6d2ee103b739ed3634ffb242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brown, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:38:31 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:31 compute-0 systemd[1]: libpod-09df7f9035770f131260fa1f987d27638b985ebd6d2ee103b739ed3634ffb242.scope: Deactivated successfully.
Nov 26 12:38:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4ae793a83ee8bafc1a816f51014ba0aad83e8e33478b075fb0ec6469b9ed212/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4ae793a83ee8bafc1a816f51014ba0aad83e8e33478b075fb0ec6469b9ed212/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:31 compute-0 podman[94016]: 2025-11-26 12:38:31.158382885 +0000 UTC m=+0.089425325 container init 759e401f3365703fc94d8f6495204699de952d55d6c931f76df77d10ab9d47e2 (image=quay.io/ceph/ceph:v18, name=exciting_bartik, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:38:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e9048fec6d6bee188a7143c831a92eec706e8da86b859787150b77bada1e993-merged.mount: Deactivated successfully.
Nov 26 12:38:31 compute-0 podman[94016]: 2025-11-26 12:38:31.162981836 +0000 UTC m=+0.094024256 container start 759e401f3365703fc94d8f6495204699de952d55d6c931f76df77d10ab9d47e2 (image=quay.io/ceph/ceph:v18, name=exciting_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 12:38:31 compute-0 podman[94016]: 2025-11-26 12:38:31.165484532 +0000 UTC m=+0.096526952 container attach 759e401f3365703fc94d8f6495204699de952d55d6c931f76df77d10ab9d47e2 (image=quay.io/ceph/ceph:v18, name=exciting_bartik, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 12:38:31 compute-0 podman[93920]: 2025-11-26 12:38:31.177111079 +0000 UTC m=+0.911807594 container remove 09df7f9035770f131260fa1f987d27638b985ebd6d2ee103b739ed3634ffb242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brown, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:38:31 compute-0 podman[94016]: 2025-11-26 12:38:31.089505067 +0000 UTC m=+0.020547506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:38:31 compute-0 systemd[1]: libpod-conmon-09df7f9035770f131260fa1f987d27638b985ebd6d2ee103b739ed3634ffb242.scope: Deactivated successfully.
Nov 26 12:38:31 compute-0 sudo[93787]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:31 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:38:31 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:31 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:38:31 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:31 compute-0 sudo[94050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:31 compute-0 sudo[94050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:31 compute-0 sudo[94050]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:31 compute-0 sudo[94075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 12:38:31 compute-0 sudo[94075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:31 compute-0 sudo[94075]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:31 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 26 12:38:31 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4174859022' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 12:38:31 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v35: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Nov 26 12:38:32 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:32 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:32 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/4174859022' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 12:38:32 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Nov 26 12:38:32 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4174859022' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 12:38:32 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Nov 26 12:38:32 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Nov 26 12:38:32 compute-0 exciting_bartik[94037]: pool 'vms' created
Nov 26 12:38:32 compute-0 systemd[1]: libpod-759e401f3365703fc94d8f6495204699de952d55d6c931f76df77d10ab9d47e2.scope: Deactivated successfully.
Nov 26 12:38:32 compute-0 podman[94016]: 2025-11-26 12:38:32.228804538 +0000 UTC m=+1.159846968 container died 759e401f3365703fc94d8f6495204699de952d55d6c931f76df77d10ab9d47e2 (image=quay.io/ceph/ceph:v18, name=exciting_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 12:38:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4ae793a83ee8bafc1a816f51014ba0aad83e8e33478b075fb0ec6469b9ed212-merged.mount: Deactivated successfully.
Nov 26 12:38:32 compute-0 podman[94016]: 2025-11-26 12:38:32.24925979 +0000 UTC m=+1.180302210 container remove 759e401f3365703fc94d8f6495204699de952d55d6c931f76df77d10ab9d47e2 (image=quay.io/ceph/ceph:v18, name=exciting_bartik, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 12:38:32 compute-0 sudo[93995]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:32 compute-0 systemd[1]: libpod-conmon-759e401f3365703fc94d8f6495204699de952d55d6c931f76df77d10ab9d47e2.scope: Deactivated successfully.
Nov 26 12:38:32 compute-0 sudo[94156]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkxlfxziaxoazwyulfenuuatrwrizhyf ; /usr/bin/python3'
Nov 26 12:38:32 compute-0 sudo[94156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:32 compute-0 python3[94158]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:38:32 compute-0 podman[94159]: 2025-11-26 12:38:32.500885769 +0000 UTC m=+0.026874999 container create db8b0728b3576c35b7515d8358a25386154ace06ea6132174ebd18a59d11938f (image=quay.io/ceph/ceph:v18, name=quirky_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 26 12:38:32 compute-0 systemd[1]: Started libpod-conmon-db8b0728b3576c35b7515d8358a25386154ace06ea6132174ebd18a59d11938f.scope.
Nov 26 12:38:32 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e0df6eab22c2dccc1257731891afcec6eb7910508e236b71fb15d844e67241/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e0df6eab22c2dccc1257731891afcec6eb7910508e236b71fb15d844e67241/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:32 compute-0 podman[94159]: 2025-11-26 12:38:32.551239439 +0000 UTC m=+0.077228658 container init db8b0728b3576c35b7515d8358a25386154ace06ea6132174ebd18a59d11938f (image=quay.io/ceph/ceph:v18, name=quirky_perlman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:38:32 compute-0 podman[94159]: 2025-11-26 12:38:32.555329117 +0000 UTC m=+0.081318336 container start db8b0728b3576c35b7515d8358a25386154ace06ea6132174ebd18a59d11938f (image=quay.io/ceph/ceph:v18, name=quirky_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 12:38:32 compute-0 podman[94159]: 2025-11-26 12:38:32.556475476 +0000 UTC m=+0.082464695 container attach db8b0728b3576c35b7515d8358a25386154ace06ea6132174ebd18a59d11938f (image=quay.io/ceph/ceph:v18, name=quirky_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 12:38:32 compute-0 podman[94159]: 2025-11-26 12:38:32.49014092 +0000 UTC m=+0.016130159 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:38:32 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:32 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 26 12:38:32 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3228469465' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 12:38:33 compute-0 ceph-mon[74966]: pgmap v35: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Nov 26 12:38:33 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/4174859022' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 12:38:33 compute-0 ceph-mon[74966]: osdmap e17: 3 total, 3 up, 3 in
Nov 26 12:38:33 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/3228469465' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 12:38:33 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Nov 26 12:38:33 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3228469465' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 12:38:33 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Nov 26 12:38:33 compute-0 quirky_perlman[94171]: pool 'volumes' created
Nov 26 12:38:33 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Nov 26 12:38:33 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 18 pg[3.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:33 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:33 compute-0 systemd[1]: libpod-db8b0728b3576c35b7515d8358a25386154ace06ea6132174ebd18a59d11938f.scope: Deactivated successfully.
Nov 26 12:38:33 compute-0 conmon[94171]: conmon db8b0728b3576c35b751 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-db8b0728b3576c35b7515d8358a25386154ace06ea6132174ebd18a59d11938f.scope/container/memory.events
Nov 26 12:38:33 compute-0 podman[94159]: 2025-11-26 12:38:33.237541323 +0000 UTC m=+0.763530542 container died db8b0728b3576c35b7515d8358a25386154ace06ea6132174ebd18a59d11938f (image=quay.io/ceph/ceph:v18, name=quirky_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 12:38:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-46e0df6eab22c2dccc1257731891afcec6eb7910508e236b71fb15d844e67241-merged.mount: Deactivated successfully.
Nov 26 12:38:33 compute-0 podman[94159]: 2025-11-26 12:38:33.259983544 +0000 UTC m=+0.785972763 container remove db8b0728b3576c35b7515d8358a25386154ace06ea6132174ebd18a59d11938f (image=quay.io/ceph/ceph:v18, name=quirky_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 12:38:33 compute-0 systemd[1]: libpod-conmon-db8b0728b3576c35b7515d8358a25386154ace06ea6132174ebd18a59d11938f.scope: Deactivated successfully.
Nov 26 12:38:33 compute-0 sudo[94156]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:33 compute-0 sudo[94231]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwlzgemtknzxhhjwdbdaveeuduyjmtdj ; /usr/bin/python3'
Nov 26 12:38:33 compute-0 sudo[94231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:33 compute-0 python3[94233]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:38:33 compute-0 podman[94234]: 2025-11-26 12:38:33.506641344 +0000 UTC m=+0.027825558 container create 5738eb540c01d983e7f45a047f3d31a3be76028665d50dcf7db48c9dbc37c4bb (image=quay.io/ceph/ceph:v18, name=relaxed_bell, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:38:33 compute-0 systemd[1]: Started libpod-conmon-5738eb540c01d983e7f45a047f3d31a3be76028665d50dcf7db48c9dbc37c4bb.scope.
Nov 26 12:38:33 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c1350c6b56a9c670f0f696969d57b53cae8b155b00a255e28606644f4d29abb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c1350c6b56a9c670f0f696969d57b53cae8b155b00a255e28606644f4d29abb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:33 compute-0 podman[94234]: 2025-11-26 12:38:33.567043645 +0000 UTC m=+0.088227869 container init 5738eb540c01d983e7f45a047f3d31a3be76028665d50dcf7db48c9dbc37c4bb (image=quay.io/ceph/ceph:v18, name=relaxed_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:38:33 compute-0 podman[94234]: 2025-11-26 12:38:33.571351786 +0000 UTC m=+0.092535991 container start 5738eb540c01d983e7f45a047f3d31a3be76028665d50dcf7db48c9dbc37c4bb (image=quay.io/ceph/ceph:v18, name=relaxed_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 26 12:38:33 compute-0 podman[94234]: 2025-11-26 12:38:33.57233124 +0000 UTC m=+0.093515464 container attach 5738eb540c01d983e7f45a047f3d31a3be76028665d50dcf7db48c9dbc37c4bb (image=quay.io/ceph/ceph:v18, name=relaxed_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 12:38:33 compute-0 podman[94234]: 2025-11-26 12:38:33.495678381 +0000 UTC m=+0.016862605 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:38:33 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v38: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:38:33 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 26 12:38:33 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3584447080' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 12:38:34 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Nov 26 12:38:34 compute-0 ceph-mon[74966]: log_channel(cluster) log [WRN] : Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 26 12:38:34 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3584447080' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 12:38:34 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Nov 26 12:38:34 compute-0 relaxed_bell[94246]: pool 'backups' created
Nov 26 12:38:34 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Nov 26 12:38:34 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/3228469465' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 12:38:34 compute-0 ceph-mon[74966]: osdmap e18: 3 total, 3 up, 3 in
Nov 26 12:38:34 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/3584447080' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 12:38:34 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 19 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:34 compute-0 systemd[1]: libpod-5738eb540c01d983e7f45a047f3d31a3be76028665d50dcf7db48c9dbc37c4bb.scope: Deactivated successfully.
Nov 26 12:38:34 compute-0 conmon[94246]: conmon 5738eb540c01d983e7f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5738eb540c01d983e7f45a047f3d31a3be76028665d50dcf7db48c9dbc37c4bb.scope/container/memory.events
Nov 26 12:38:34 compute-0 podman[94234]: 2025-11-26 12:38:34.241465902 +0000 UTC m=+0.762650096 container died 5738eb540c01d983e7f45a047f3d31a3be76028665d50dcf7db48c9dbc37c4bb (image=quay.io/ceph/ceph:v18, name=relaxed_bell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 12:38:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c1350c6b56a9c670f0f696969d57b53cae8b155b00a255e28606644f4d29abb-merged.mount: Deactivated successfully.
Nov 26 12:38:34 compute-0 podman[94234]: 2025-11-26 12:38:34.262383327 +0000 UTC m=+0.783567532 container remove 5738eb540c01d983e7f45a047f3d31a3be76028665d50dcf7db48c9dbc37c4bb (image=quay.io/ceph/ceph:v18, name=relaxed_bell, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 12:38:34 compute-0 systemd[1]: libpod-conmon-5738eb540c01d983e7f45a047f3d31a3be76028665d50dcf7db48c9dbc37c4bb.scope: Deactivated successfully.
Nov 26 12:38:34 compute-0 sudo[94231]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:34 compute-0 sudo[94306]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwqovjpqodlpmxwdorkemytdrpuckuym ; /usr/bin/python3'
Nov 26 12:38:34 compute-0 sudo[94306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:34 compute-0 python3[94308]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:38:34 compute-0 podman[94309]: 2025-11-26 12:38:34.507181159 +0000 UTC m=+0.027452171 container create 422a7128068f7b1204559552e5120cdbad0ac7d1e7e79430ee0cc11385c104f9 (image=quay.io/ceph/ceph:v18, name=infallible_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 12:38:34 compute-0 systemd[1]: Started libpod-conmon-422a7128068f7b1204559552e5120cdbad0ac7d1e7e79430ee0cc11385c104f9.scope.
Nov 26 12:38:34 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a364fa04130773863a74d27dce78f5ff05de75179d9b620828b16fd5616dffc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a364fa04130773863a74d27dce78f5ff05de75179d9b620828b16fd5616dffc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:34 compute-0 podman[94309]: 2025-11-26 12:38:34.554298787 +0000 UTC m=+0.074569799 container init 422a7128068f7b1204559552e5120cdbad0ac7d1e7e79430ee0cc11385c104f9 (image=quay.io/ceph/ceph:v18, name=infallible_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:38:34 compute-0 podman[94309]: 2025-11-26 12:38:34.558377444 +0000 UTC m=+0.078648455 container start 422a7128068f7b1204559552e5120cdbad0ac7d1e7e79430ee0cc11385c104f9 (image=quay.io/ceph/ceph:v18, name=infallible_noyce, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:38:34 compute-0 podman[94309]: 2025-11-26 12:38:34.559521207 +0000 UTC m=+0.079792219 container attach 422a7128068f7b1204559552e5120cdbad0ac7d1e7e79430ee0cc11385c104f9 (image=quay.io/ceph/ceph:v18, name=infallible_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:38:34 compute-0 podman[94309]: 2025-11-26 12:38:34.49583386 +0000 UTC m=+0.016104872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:38:34 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:34 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 26 12:38:34 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1051002342' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 12:38:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Nov 26 12:38:35 compute-0 ceph-mon[74966]: pgmap v38: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:38:35 compute-0 ceph-mon[74966]: Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 26 12:38:35 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/3584447080' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 12:38:35 compute-0 ceph-mon[74966]: osdmap e19: 3 total, 3 up, 3 in
Nov 26 12:38:35 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1051002342' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 12:38:35 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1051002342' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 12:38:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Nov 26 12:38:35 compute-0 infallible_noyce[94322]: pool 'images' created
Nov 26 12:38:35 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Nov 26 12:38:35 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 20 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:35 compute-0 systemd[1]: libpod-422a7128068f7b1204559552e5120cdbad0ac7d1e7e79430ee0cc11385c104f9.scope: Deactivated successfully.
Nov 26 12:38:35 compute-0 conmon[94322]: conmon 422a7128068f7b120455 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-422a7128068f7b1204559552e5120cdbad0ac7d1e7e79430ee0cc11385c104f9.scope/container/memory.events
Nov 26 12:38:35 compute-0 podman[94309]: 2025-11-26 12:38:35.253505165 +0000 UTC m=+0.773776177 container died 422a7128068f7b1204559552e5120cdbad0ac7d1e7e79430ee0cc11385c104f9 (image=quay.io/ceph/ceph:v18, name=infallible_noyce, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:38:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a364fa04130773863a74d27dce78f5ff05de75179d9b620828b16fd5616dffc-merged.mount: Deactivated successfully.
Nov 26 12:38:35 compute-0 podman[94309]: 2025-11-26 12:38:35.274062309 +0000 UTC m=+0.794333321 container remove 422a7128068f7b1204559552e5120cdbad0ac7d1e7e79430ee0cc11385c104f9 (image=quay.io/ceph/ceph:v18, name=infallible_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:38:35 compute-0 systemd[1]: libpod-conmon-422a7128068f7b1204559552e5120cdbad0ac7d1e7e79430ee0cc11385c104f9.scope: Deactivated successfully.
Nov 26 12:38:35 compute-0 sudo[94306]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:35 compute-0 sudo[94382]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siiyhneoazoyatluuznfobrtdzxrdotb ; /usr/bin/python3'
Nov 26 12:38:35 compute-0 sudo[94382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:35 compute-0 python3[94384]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:38:35 compute-0 podman[94385]: 2025-11-26 12:38:35.522033785 +0000 UTC m=+0.026447460 container create bb2acb31044c9021199e1586ed8983900c856ceac3f35f9203c02a50423f9001 (image=quay.io/ceph/ceph:v18, name=mystifying_buck, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:38:35 compute-0 systemd[1]: Started libpod-conmon-bb2acb31044c9021199e1586ed8983900c856ceac3f35f9203c02a50423f9001.scope.
Nov 26 12:38:35 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2565d444ecd1d3a88580370fb35cfee5afce4f7a270d1582269903b66e631e5f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2565d444ecd1d3a88580370fb35cfee5afce4f7a270d1582269903b66e631e5f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:35 compute-0 podman[94385]: 2025-11-26 12:38:35.580641542 +0000 UTC m=+0.085055227 container init bb2acb31044c9021199e1586ed8983900c856ceac3f35f9203c02a50423f9001 (image=quay.io/ceph/ceph:v18, name=mystifying_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 12:38:35 compute-0 podman[94385]: 2025-11-26 12:38:35.585018943 +0000 UTC m=+0.089432629 container start bb2acb31044c9021199e1586ed8983900c856ceac3f35f9203c02a50423f9001 (image=quay.io/ceph/ceph:v18, name=mystifying_buck, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 12:38:35 compute-0 podman[94385]: 2025-11-26 12:38:35.586210559 +0000 UTC m=+0.090624233 container attach bb2acb31044c9021199e1586ed8983900c856ceac3f35f9203c02a50423f9001 (image=quay.io/ceph/ceph:v18, name=mystifying_buck, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:38:35 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 20 pg[5.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:35 compute-0 podman[94385]: 2025-11-26 12:38:35.51181459 +0000 UTC m=+0.016228285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:38:35 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v41: 5 pgs: 2 unknown, 3 active+clean; 449 KiB data, 79 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:38:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:38:35
Nov 26 12:38:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 12:38:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Some PGs (0.400000) are unknown; try again later
Nov 26 12:38:35 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 12:38:35 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:38:35 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 12:38:35 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:38:35 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 26 12:38:35 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:38:35 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 26 12:38:35 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:38:35 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 26 12:38:35 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:38:35 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 26 12:38:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 12:38:35 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 12:38:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:38:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:38:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 12:38:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:38:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:38:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 12:38:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:38:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:38:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:38:36 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 26 12:38:36 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/492052497' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 12:38:36 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Nov 26 12:38:36 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 26 12:38:36 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/492052497' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 12:38:36 compute-0 mystifying_buck[94397]: pool 'cephfs.cephfs.meta' created
Nov 26 12:38:36 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Nov 26 12:38:36 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Nov 26 12:38:36 compute-0 ceph-mgr[75236]: [progress INFO root] update: starting ev a5a5e78d-23d6-4243-a80d-24d48f919f2e (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 26 12:38:36 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1051002342' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 12:38:36 compute-0 ceph-mon[74966]: osdmap e20: 3 total, 3 up, 3 in
Nov 26 12:38:36 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 12:38:36 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/492052497' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 12:38:36 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 21 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:36 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 12:38:36 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 12:38:36 compute-0 systemd[1]: libpod-bb2acb31044c9021199e1586ed8983900c856ceac3f35f9203c02a50423f9001.scope: Deactivated successfully.
Nov 26 12:38:36 compute-0 podman[94385]: 2025-11-26 12:38:36.254005005 +0000 UTC m=+0.758418680 container died bb2acb31044c9021199e1586ed8983900c856ceac3f35f9203c02a50423f9001 (image=quay.io/ceph/ceph:v18, name=mystifying_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 12:38:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-2565d444ecd1d3a88580370fb35cfee5afce4f7a270d1582269903b66e631e5f-merged.mount: Deactivated successfully.
Nov 26 12:38:36 compute-0 podman[94385]: 2025-11-26 12:38:36.275824768 +0000 UTC m=+0.780238443 container remove bb2acb31044c9021199e1586ed8983900c856ceac3f35f9203c02a50423f9001 (image=quay.io/ceph/ceph:v18, name=mystifying_buck, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 12:38:36 compute-0 systemd[1]: libpod-conmon-bb2acb31044c9021199e1586ed8983900c856ceac3f35f9203c02a50423f9001.scope: Deactivated successfully.
Nov 26 12:38:36 compute-0 sudo[94382]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:36 compute-0 sudo[94456]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrpfkgindqnjsaebgmuvqwzvcfwlvsiq ; /usr/bin/python3'
Nov 26 12:38:36 compute-0 sudo[94456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:36 compute-0 python3[94458]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:38:36 compute-0 podman[94459]: 2025-11-26 12:38:36.517953639 +0000 UTC m=+0.024851340 container create ba8eedabb5028ea407c5606984ff3d73f6bcd6d8f357c45bea729b89a883ffa9 (image=quay.io/ceph/ceph:v18, name=brave_austin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:38:36 compute-0 systemd[1]: Started libpod-conmon-ba8eedabb5028ea407c5606984ff3d73f6bcd6d8f357c45bea729b89a883ffa9.scope.
Nov 26 12:38:36 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a52ca279ea5c0f8f3c2e15995eceff44f648a94228260d776fde2bff9c1f659/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a52ca279ea5c0f8f3c2e15995eceff44f648a94228260d776fde2bff9c1f659/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:36 compute-0 podman[94459]: 2025-11-26 12:38:36.559017743 +0000 UTC m=+0.065915464 container init ba8eedabb5028ea407c5606984ff3d73f6bcd6d8f357c45bea729b89a883ffa9 (image=quay.io/ceph/ceph:v18, name=brave_austin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 12:38:36 compute-0 podman[94459]: 2025-11-26 12:38:36.562633193 +0000 UTC m=+0.069530894 container start ba8eedabb5028ea407c5606984ff3d73f6bcd6d8f357c45bea729b89a883ffa9 (image=quay.io/ceph/ceph:v18, name=brave_austin, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Nov 26 12:38:36 compute-0 podman[94459]: 2025-11-26 12:38:36.563561811 +0000 UTC m=+0.070459532 container attach ba8eedabb5028ea407c5606984ff3d73f6bcd6d8f357c45bea729b89a883ffa9 (image=quay.io/ceph/ceph:v18, name=brave_austin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:38:36 compute-0 podman[94459]: 2025-11-26 12:38:36.508227828 +0000 UTC m=+0.015125549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:38:36 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 21 pg[6.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:36 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 26 12:38:36 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1590599154' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 12:38:37 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Nov 26 12:38:37 compute-0 ceph-mon[74966]: pgmap v41: 5 pgs: 2 unknown, 3 active+clean; 449 KiB data, 79 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:38:37 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 26 12:38:37 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/492052497' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 12:38:37 compute-0 ceph-mon[74966]: osdmap e21: 3 total, 3 up, 3 in
Nov 26 12:38:37 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 12:38:37 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1590599154' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 12:38:37 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 26 12:38:37 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1590599154' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 12:38:37 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Nov 26 12:38:37 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Nov 26 12:38:37 compute-0 brave_austin[94471]: pool 'cephfs.cephfs.data' created
Nov 26 12:38:37 compute-0 ceph-mgr[75236]: [progress INFO root] update: starting ev 095fcc50-4d3c-478f-90e7-89107ae53431 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 26 12:38:37 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 12:38:37 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 12:38:37 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 22 pg[7.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:37 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:37 compute-0 systemd[1]: libpod-ba8eedabb5028ea407c5606984ff3d73f6bcd6d8f357c45bea729b89a883ffa9.scope: Deactivated successfully.
Nov 26 12:38:37 compute-0 podman[94459]: 2025-11-26 12:38:37.264421747 +0000 UTC m=+0.771319448 container died ba8eedabb5028ea407c5606984ff3d73f6bcd6d8f357c45bea729b89a883ffa9 (image=quay.io/ceph/ceph:v18, name=brave_austin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 12:38:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a52ca279ea5c0f8f3c2e15995eceff44f648a94228260d776fde2bff9c1f659-merged.mount: Deactivated successfully.
Nov 26 12:38:37 compute-0 podman[94459]: 2025-11-26 12:38:37.285775178 +0000 UTC m=+0.792672879 container remove ba8eedabb5028ea407c5606984ff3d73f6bcd6d8f357c45bea729b89a883ffa9 (image=quay.io/ceph/ceph:v18, name=brave_austin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 26 12:38:37 compute-0 systemd[1]: libpod-conmon-ba8eedabb5028ea407c5606984ff3d73f6bcd6d8f357c45bea729b89a883ffa9.scope: Deactivated successfully.
Nov 26 12:38:37 compute-0 sudo[94456]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:37 compute-0 sudo[94532]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yesdztboegwnylfossivtxctmhbeyavv ; /usr/bin/python3'
Nov 26 12:38:37 compute-0 sudo[94532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:37 compute-0 python3[94534]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:38:37 compute-0 podman[94535]: 2025-11-26 12:38:37.552936872 +0000 UTC m=+0.027865382 container create 3efcec20678ef11b3d87c1616d63eb9ef955d9d4dfbc240d98b82d3c3f8cbfd8 (image=quay.io/ceph/ceph:v18, name=adoring_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 12:38:37 compute-0 systemd[1]: Started libpod-conmon-3efcec20678ef11b3d87c1616d63eb9ef955d9d4dfbc240d98b82d3c3f8cbfd8.scope.
Nov 26 12:38:37 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e7db3b8ff802abb76c1e3d351239e61914f48de42a61a1324a459802062367/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e7db3b8ff802abb76c1e3d351239e61914f48de42a61a1324a459802062367/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:37 compute-0 podman[94535]: 2025-11-26 12:38:37.589309149 +0000 UTC m=+0.064237669 container init 3efcec20678ef11b3d87c1616d63eb9ef955d9d4dfbc240d98b82d3c3f8cbfd8 (image=quay.io/ceph/ceph:v18, name=adoring_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:38:37 compute-0 podman[94535]: 2025-11-26 12:38:37.593450125 +0000 UTC m=+0.068378634 container start 3efcec20678ef11b3d87c1616d63eb9ef955d9d4dfbc240d98b82d3c3f8cbfd8 (image=quay.io/ceph/ceph:v18, name=adoring_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 12:38:37 compute-0 podman[94535]: 2025-11-26 12:38:37.594789428 +0000 UTC m=+0.069717958 container attach 3efcec20678ef11b3d87c1616d63eb9ef955d9d4dfbc240d98b82d3c3f8cbfd8 (image=quay.io/ceph/ceph:v18, name=adoring_haibt, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 12:38:37 compute-0 podman[94535]: 2025-11-26 12:38:37.541887146 +0000 UTC m=+0.016815676 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:38:37 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v44: 7 pgs: 3 unknown, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:38:37 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 12:38:37 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 12:38:37 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 12:38:37 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 12:38:38 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Nov 26 12:38:38 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3930302744' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 26 12:38:38 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Nov 26 12:38:38 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 26 12:38:38 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 12:38:38 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 12:38:38 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3930302744' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 26 12:38:38 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Nov 26 12:38:38 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 23 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=23 pruub=10.971715927s) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active pruub 25.439893723s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:38 compute-0 adoring_haibt[94547]: enabled application 'rbd' on pool 'vms'
Nov 26 12:38:38 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Nov 26 12:38:38 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 23 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=23 pruub=10.971715927s) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown pruub 25.439893723s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:38 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 23 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=23 pruub=11.972233772s) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active pruub 30.163213730s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:38 compute-0 ceph-mgr[75236]: [progress INFO root] update: starting ev 0b6812b7-a6f8-4a62-8625-03f8393508e0 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 26 12:38:38 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 23 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=23 pruub=11.972233772s) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown pruub 30.163213730s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:38 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 12:38:38 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 12:38:38 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 26 12:38:38 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1590599154' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 12:38:38 compute-0 ceph-mon[74966]: osdmap e22: 3 total, 3 up, 3 in
Nov 26 12:38:38 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 12:38:38 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 12:38:38 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 12:38:38 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/3930302744' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 26 12:38:38 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 23 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:38 compute-0 systemd[1]: libpod-3efcec20678ef11b3d87c1616d63eb9ef955d9d4dfbc240d98b82d3c3f8cbfd8.scope: Deactivated successfully.
Nov 26 12:38:38 compute-0 podman[94572]: 2025-11-26 12:38:38.302133864 +0000 UTC m=+0.016036563 container died 3efcec20678ef11b3d87c1616d63eb9ef955d9d4dfbc240d98b82d3c3f8cbfd8 (image=quay.io/ceph/ceph:v18, name=adoring_haibt, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:38:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-01e7db3b8ff802abb76c1e3d351239e61914f48de42a61a1324a459802062367-merged.mount: Deactivated successfully.
Nov 26 12:38:38 compute-0 podman[94572]: 2025-11-26 12:38:38.320279816 +0000 UTC m=+0.034182495 container remove 3efcec20678ef11b3d87c1616d63eb9ef955d9d4dfbc240d98b82d3c3f8cbfd8 (image=quay.io/ceph/ceph:v18, name=adoring_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 26 12:38:38 compute-0 systemd[1]: libpod-conmon-3efcec20678ef11b3d87c1616d63eb9ef955d9d4dfbc240d98b82d3c3f8cbfd8.scope: Deactivated successfully.
Nov 26 12:38:38 compute-0 sudo[94532]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:38 compute-0 sudo[94606]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbhdwkuredoxzzsvtxxutkztccnjelnw ; /usr/bin/python3'
Nov 26 12:38:38 compute-0 sudo[94606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:38 compute-0 python3[94608]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:38:38 compute-0 podman[94609]: 2025-11-26 12:38:38.565732306 +0000 UTC m=+0.026441910 container create 8cfdd0747598d806f5b3928ff65c82743f33362e60dbc64e513b5cf86e8929a2 (image=quay.io/ceph/ceph:v18, name=sweet_greider, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:38:38 compute-0 systemd[1]: Started libpod-conmon-8cfdd0747598d806f5b3928ff65c82743f33362e60dbc64e513b5cf86e8929a2.scope.
Nov 26 12:38:38 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00826225623a65c0a2e303c6fd53379c8e948f9ad9e8769c9d8d2d8a0df0af58/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00826225623a65c0a2e303c6fd53379c8e948f9ad9e8769c9d8d2d8a0df0af58/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:38 compute-0 podman[94609]: 2025-11-26 12:38:38.611699496 +0000 UTC m=+0.072409100 container init 8cfdd0747598d806f5b3928ff65c82743f33362e60dbc64e513b5cf86e8929a2 (image=quay.io/ceph/ceph:v18, name=sweet_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 12:38:38 compute-0 podman[94609]: 2025-11-26 12:38:38.61571337 +0000 UTC m=+0.076422974 container start 8cfdd0747598d806f5b3928ff65c82743f33362e60dbc64e513b5cf86e8929a2 (image=quay.io/ceph/ceph:v18, name=sweet_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 26 12:38:38 compute-0 podman[94609]: 2025-11-26 12:38:38.616817369 +0000 UTC m=+0.077526974 container attach 8cfdd0747598d806f5b3928ff65c82743f33362e60dbc64e513b5cf86e8929a2 (image=quay.io/ceph/ceph:v18, name=sweet_greider, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 12:38:38 compute-0 podman[94609]: 2025-11-26 12:38:38.555294757 +0000 UTC m=+0.016004381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:38:39 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Nov 26 12:38:39 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/568858784' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 26 12:38:39 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Nov 26 12:38:39 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 26 12:38:39 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/568858784' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 26 12:38:39 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Nov 26 12:38:39 compute-0 sweet_greider[94621]: enabled application 'rbd' on pool 'volumes'
Nov 26 12:38:39 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Nov 26 12:38:39 compute-0 ceph-mgr[75236]: [progress INFO root] update: starting ev 3b18ecbb-6643-45ed-9c0d-a4c4775f6645 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 26 12:38:39 compute-0 ceph-mgr[75236]: [progress INFO root] complete: finished ev a5a5e78d-23d6-4243-a80d-24d48f919f2e (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 26 12:38:39 compute-0 ceph-mgr[75236]: [progress INFO root] Completed event a5a5e78d-23d6-4243-a80d-24d48f919f2e (PG autoscaler increasing pool 2 PGs from 1 to 32) in 3 seconds
Nov 26 12:38:39 compute-0 ceph-mgr[75236]: [progress INFO root] complete: finished ev 095fcc50-4d3c-478f-90e7-89107ae53431 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 26 12:38:39 compute-0 ceph-mgr[75236]: [progress INFO root] Completed event 095fcc50-4d3c-478f-90e7-89107ae53431 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 2 seconds
Nov 26 12:38:39 compute-0 ceph-mgr[75236]: [progress INFO root] complete: finished ev 0b6812b7-a6f8-4a62-8625-03f8393508e0 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 26 12:38:39 compute-0 ceph-mgr[75236]: [progress INFO root] Completed event 0b6812b7-a6f8-4a62-8625-03f8393508e0 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 1 seconds
Nov 26 12:38:39 compute-0 ceph-mgr[75236]: [progress INFO root] complete: finished ev 3b18ecbb-6643-45ed-9c0d-a4c4775f6645 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 26 12:38:39 compute-0 ceph-mgr[75236]: [progress INFO root] Completed event 3b18ecbb-6643-45ed-9c0d-a4c4775f6645 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 0 seconds
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.1d( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.1c( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.b( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.a( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.9( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.8( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.6( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.5( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.4( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.3( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.2( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.1f( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.7( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.c( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.d( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.e( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.f( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.10( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.11( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.12( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.13( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.1e( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.1f( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.1d( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.1c( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.1b( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.a( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.9( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.8( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.7( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.6( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.5( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.3( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.1( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.4( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.2( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.b( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.c( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.d( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.e( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.f( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.11( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.13( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.14( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.15( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.16( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.17( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.18( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.19( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.1a( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.10( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.12( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.14( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.15( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.16( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.17( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.18( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.19( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.1a( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.1b( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.1( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.1e( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.a( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.9( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-mon[74966]: pgmap v44: 7 pgs: 3 unknown, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.8( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 26 12:38:39 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 12:38:39 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 12:38:39 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/3930302744' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 26 12:38:39 compute-0 ceph-mon[74966]: osdmap e23: 3 total, 3 up, 3 in
Nov 26 12:38:39 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.1e( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.1d( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.1c( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.a( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.9( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.1b( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.1f( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.7( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.6( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.5( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.3( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.1( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.0( empty local-lis/les=23/24 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.2( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.8( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.c( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.4( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/568858784' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 26 12:38:39 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 26 12:38:39 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/568858784' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 26 12:38:39 compute-0 ceph-mon[74966]: osdmap e24: 3 total, 3 up, 3 in
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.6( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.4( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.3( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.2( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.1f( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.7( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.0( empty local-lis/les=23/24 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.c( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.d( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.e( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.10( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.11( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.12( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.13( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.14( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.15( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.16( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.b( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.f( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.d( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.e( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.11( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.14( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.16( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.17( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.13( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.18( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.15( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.1a( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.10( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.12( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.17( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.19( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.1a( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 24 pg[3.19( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [1] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.1b( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.1( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 24 pg[2.1e( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [2] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:39 compute-0 systemd[1]: libpod-8cfdd0747598d806f5b3928ff65c82743f33362e60dbc64e513b5cf86e8929a2.scope: Deactivated successfully.
Nov 26 12:38:39 compute-0 conmon[94621]: conmon 8cfdd0747598d806f5b3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8cfdd0747598d806f5b3928ff65c82743f33362e60dbc64e513b5cf86e8929a2.scope/container/memory.events
Nov 26 12:38:39 compute-0 podman[94646]: 2025-11-26 12:38:39.300793021 +0000 UTC m=+0.015802028 container died 8cfdd0747598d806f5b3928ff65c82743f33362e60dbc64e513b5cf86e8929a2 (image=quay.io/ceph/ceph:v18, name=sweet_greider, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 12:38:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-00826225623a65c0a2e303c6fd53379c8e948f9ad9e8769c9d8d2d8a0df0af58-merged.mount: Deactivated successfully.
Nov 26 12:38:39 compute-0 podman[94646]: 2025-11-26 12:38:39.323368745 +0000 UTC m=+0.038377732 container remove 8cfdd0747598d806f5b3928ff65c82743f33362e60dbc64e513b5cf86e8929a2 (image=quay.io/ceph/ceph:v18, name=sweet_greider, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 12:38:39 compute-0 systemd[1]: libpod-conmon-8cfdd0747598d806f5b3928ff65c82743f33362e60dbc64e513b5cf86e8929a2.scope: Deactivated successfully.
Nov 26 12:38:39 compute-0 sudo[94606]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:39 compute-0 sudo[94681]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yueothdnaxqryywboftksqlaxnqffeyl ; /usr/bin/python3'
Nov 26 12:38:39 compute-0 sudo[94681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:39 compute-0 python3[94683]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:38:39 compute-0 podman[94684]: 2025-11-26 12:38:39.58974941 +0000 UTC m=+0.027151652 container create 6c2809d9c6a5528cd501031864a35930735688c719acd5deb48da19ebfc7b769 (image=quay.io/ceph/ceph:v18, name=vigorous_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:38:39 compute-0 systemd[1]: Started libpod-conmon-6c2809d9c6a5528cd501031864a35930735688c719acd5deb48da19ebfc7b769.scope.
Nov 26 12:38:39 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65589fb9917e834e7b0d837d487335992727f67616120bc82ca66320e774cd45/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65589fb9917e834e7b0d837d487335992727f67616120bc82ca66320e774cd45/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:39 compute-0 podman[94684]: 2025-11-26 12:38:39.640159708 +0000 UTC m=+0.077561960 container init 6c2809d9c6a5528cd501031864a35930735688c719acd5deb48da19ebfc7b769 (image=quay.io/ceph/ceph:v18, name=vigorous_hellman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:38:39 compute-0 podman[94684]: 2025-11-26 12:38:39.644697784 +0000 UTC m=+0.082100026 container start 6c2809d9c6a5528cd501031864a35930735688c719acd5deb48da19ebfc7b769 (image=quay.io/ceph/ceph:v18, name=vigorous_hellman, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:38:39 compute-0 podman[94684]: 2025-11-26 12:38:39.645724797 +0000 UTC m=+0.083127059 container attach 6c2809d9c6a5528cd501031864a35930735688c719acd5deb48da19ebfc7b769 (image=quay.io/ceph/ceph:v18, name=vigorous_hellman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 12:38:39 compute-0 podman[94684]: 2025-11-26 12:38:39.578549981 +0000 UTC m=+0.015952224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:38:39 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Nov 26 12:38:39 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Nov 26 12:38:39 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v47: 69 pgs: 1 peering, 32 unknown, 36 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:38:39 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 12:38:39 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 12:38:39 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 12:38:39 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 12:38:40 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Nov 26 12:38:40 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/941757284' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 26 12:38:40 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Nov 26 12:38:40 compute-0 ceph-mon[74966]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 26 12:38:40 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 12:38:40 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 12:38:40 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/941757284' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 26 12:38:40 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Nov 26 12:38:40 compute-0 vigorous_hellman[94697]: enabled application 'rbd' on pool 'backups'
Nov 26 12:38:40 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Nov 26 12:38:40 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 25 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=25 pruub=10.968404770s) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active pruub 34.490970612s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:40 compute-0 ceph-mon[74966]: 2.1 scrub starts
Nov 26 12:38:40 compute-0 ceph-mon[74966]: 2.1 scrub ok
Nov 26 12:38:40 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 12:38:40 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 12:38:40 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/941757284' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 26 12:38:40 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 25 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=25 pruub=10.968404770s) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown pruub 34.490970612s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:40 compute-0 systemd[1]: libpod-6c2809d9c6a5528cd501031864a35930735688c719acd5deb48da19ebfc7b769.scope: Deactivated successfully.
Nov 26 12:38:40 compute-0 podman[94684]: 2025-11-26 12:38:40.285328647 +0000 UTC m=+0.722730889 container died 6c2809d9c6a5528cd501031864a35930735688c719acd5deb48da19ebfc7b769 (image=quay.io/ceph/ceph:v18, name=vigorous_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 12:38:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-65589fb9917e834e7b0d837d487335992727f67616120bc82ca66320e774cd45-merged.mount: Deactivated successfully.
Nov 26 12:38:40 compute-0 podman[94684]: 2025-11-26 12:38:40.306596696 +0000 UTC m=+0.743998938 container remove 6c2809d9c6a5528cd501031864a35930735688c719acd5deb48da19ebfc7b769 (image=quay.io/ceph/ceph:v18, name=vigorous_hellman, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:38:40 compute-0 systemd[1]: libpod-conmon-6c2809d9c6a5528cd501031864a35930735688c719acd5deb48da19ebfc7b769.scope: Deactivated successfully.
Nov 26 12:38:40 compute-0 sudo[94681]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:40 compute-0 sudo[94754]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofljpjfalhqwjwvcoctkckerqcxrcsdz ; /usr/bin/python3'
Nov 26 12:38:40 compute-0 sudo[94754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:40 compute-0 python3[94756]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:38:40 compute-0 podman[94757]: 2025-11-26 12:38:40.549421013 +0000 UTC m=+0.028324111 container create a3ddcf033281a723f129ec889417ac4bba1bf944e8d21dc1037e703bc67dee38 (image=quay.io/ceph/ceph:v18, name=thirsty_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:38:40 compute-0 systemd[1]: Started libpod-conmon-a3ddcf033281a723f129ec889417ac4bba1bf944e8d21dc1037e703bc67dee38.scope.
Nov 26 12:38:40 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64a4ff336334a11163669435b58f8cea2335f903761bd045f11f568341658f58/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64a4ff336334a11163669435b58f8cea2335f903761bd045f11f568341658f58/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:40 compute-0 podman[94757]: 2025-11-26 12:38:40.605527578 +0000 UTC m=+0.084430695 container init a3ddcf033281a723f129ec889417ac4bba1bf944e8d21dc1037e703bc67dee38 (image=quay.io/ceph/ceph:v18, name=thirsty_margulis, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:38:40 compute-0 podman[94757]: 2025-11-26 12:38:40.609447485 +0000 UTC m=+0.088350582 container start a3ddcf033281a723f129ec889417ac4bba1bf944e8d21dc1037e703bc67dee38 (image=quay.io/ceph/ceph:v18, name=thirsty_margulis, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 12:38:40 compute-0 podman[94757]: 2025-11-26 12:38:40.611946252 +0000 UTC m=+0.090849349 container attach a3ddcf033281a723f129ec889417ac4bba1bf944e8d21dc1037e703bc67dee38 (image=quay.io/ceph/ceph:v18, name=thirsty_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 26 12:38:40 compute-0 podman[94757]: 2025-11-26 12:38:40.53765964 +0000 UTC m=+0.016562747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:38:40 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 25 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=25 pruub=11.576715469s) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active pruub 28.454172134s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:40 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 25 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=25 pruub=11.576715469s) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown pruub 28.454172134s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:40 compute-0 ceph-mgr[75236]: [progress INFO root] Writing back 7 completed events
Nov 26 12:38:40 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 26 12:38:40 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:40 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:38:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Nov 26 12:38:41 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/776129414' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 26 12:38:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Nov 26 12:38:41 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/776129414' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 26 12:38:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Nov 26 12:38:41 compute-0 thirsty_margulis[94769]: enabled application 'rbd' on pool 'images'
Nov 26 12:38:41 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.1f( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.1e( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.1d( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.1c( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.7( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.b( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.6( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.1b( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.a( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.5( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.1a( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.9( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.4( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.19( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.3( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.1( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.2( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.c( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.d( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.e( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.8( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.f( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.10( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.11( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.12( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.13( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.14( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.15( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.16( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.17( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.18( empty local-lis/les=19/20 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.1d( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.1e( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-mon[74966]: pgmap v47: 69 pgs: 1 peering, 32 unknown, 36 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:38:41 compute-0 ceph-mon[74966]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.1f( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.10( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 12:38:41 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/941757284' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.11( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-mon[74966]: osdmap e25: 3 total, 3 up, 3 in
Nov 26 12:38:41 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.12( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/776129414' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.13( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.14( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.15( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.16( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.17( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.8( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.9( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.a( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.b( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.7( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.6( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.1c( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.5( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.4( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.3( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.2( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.1( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.f( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.e( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.d( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.c( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.1b( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.1a( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.19( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.18( empty local-lis/les=20/21 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.1f( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.1e( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.1d( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.1c( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.7( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.b( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.6( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.1b( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.a( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.5( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.1a( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.1d( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.1e( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.1f( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.11( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.12( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.13( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.14( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.15( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.16( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.10( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.8( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.9( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.a( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.17( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.b( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.7( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.0( empty local-lis/les=25/26 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.6( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.1c( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.5( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.4( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.3( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.1( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.f( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.e( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.d( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.c( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.1b( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.1a( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.19( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.18( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 26 pg[5.2( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=20/20 les/c/f=21/21/0 sis=25) [2] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 systemd[1]: libpod-a3ddcf033281a723f129ec889417ac4bba1bf944e8d21dc1037e703bc67dee38.scope: Deactivated successfully.
Nov 26 12:38:41 compute-0 podman[94757]: 2025-11-26 12:38:41.292749692 +0000 UTC m=+0.771652800 container died a3ddcf033281a723f129ec889417ac4bba1bf944e8d21dc1037e703bc67dee38 (image=quay.io/ceph/ceph:v18, name=thirsty_margulis, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.9( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.4( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.19( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.3( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.1( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.2( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.0( empty local-lis/les=25/26 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.c( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.d( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.e( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.f( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.8( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.10( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.11( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.12( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.13( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.14( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.16( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.15( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.17( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 26 pg[4.18( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=19/19 les/c/f=20/20/0 sis=25) [0] r=0 lpr=25 pi=[19,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-64a4ff336334a11163669435b58f8cea2335f903761bd045f11f568341658f58-merged.mount: Deactivated successfully.
Nov 26 12:38:41 compute-0 podman[94757]: 2025-11-26 12:38:41.31394842 +0000 UTC m=+0.792851517 container remove a3ddcf033281a723f129ec889417ac4bba1bf944e8d21dc1037e703bc67dee38 (image=quay.io/ceph/ceph:v18, name=thirsty_margulis, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 12:38:41 compute-0 sudo[94754]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:41 compute-0 systemd[1]: libpod-conmon-a3ddcf033281a723f129ec889417ac4bba1bf944e8d21dc1037e703bc67dee38.scope: Deactivated successfully.
Nov 26 12:38:41 compute-0 sudo[94829]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmwmoqmzqppxfezswbmgmkvsjndqcveb ; /usr/bin/python3'
Nov 26 12:38:41 compute-0 sudo[94829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:41 compute-0 python3[94831]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:38:41 compute-0 podman[94832]: 2025-11-26 12:38:41.559415808 +0000 UTC m=+0.028118571 container create 061d1dda6ea6cbd952739fe97c6f654070bee8b01e5c04381a04f698a4dc935f (image=quay.io/ceph/ceph:v18, name=cool_archimedes, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:38:41 compute-0 systemd[1]: Started libpod-conmon-061d1dda6ea6cbd952739fe97c6f654070bee8b01e5c04381a04f698a4dc935f.scope.
Nov 26 12:38:41 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf147d2ef535ade53e6020421821c69f2532a6690f362eabef84a5c356128656/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf147d2ef535ade53e6020421821c69f2532a6690f362eabef84a5c356128656/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:41 compute-0 podman[94832]: 2025-11-26 12:38:41.610347311 +0000 UTC m=+0.079050074 container init 061d1dda6ea6cbd952739fe97c6f654070bee8b01e5c04381a04f698a4dc935f (image=quay.io/ceph/ceph:v18, name=cool_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 12:38:41 compute-0 podman[94832]: 2025-11-26 12:38:41.615006356 +0000 UTC m=+0.083709119 container start 061d1dda6ea6cbd952739fe97c6f654070bee8b01e5c04381a04f698a4dc935f (image=quay.io/ceph/ceph:v18, name=cool_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:38:41 compute-0 podman[94832]: 2025-11-26 12:38:41.61619764 +0000 UTC m=+0.084900403 container attach 061d1dda6ea6cbd952739fe97c6f654070bee8b01e5c04381a04f698a4dc935f (image=quay.io/ceph/ceph:v18, name=cool_archimedes, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 12:38:41 compute-0 podman[94832]: 2025-11-26 12:38:41.547971335 +0000 UTC m=+0.016674128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:38:41 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v50: 131 pgs: 1 peering, 62 unknown, 68 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:38:42 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Nov 26 12:38:42 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1601133733' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 26 12:38:42 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Nov 26 12:38:42 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1601133733' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 26 12:38:42 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Nov 26 12:38:42 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/776129414' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 26 12:38:42 compute-0 ceph-mon[74966]: osdmap e26: 3 total, 3 up, 3 in
Nov 26 12:38:42 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1601133733' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 26 12:38:42 compute-0 cool_archimedes[94844]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Nov 26 12:38:42 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Nov 26 12:38:42 compute-0 systemd[1]: libpod-061d1dda6ea6cbd952739fe97c6f654070bee8b01e5c04381a04f698a4dc935f.scope: Deactivated successfully.
Nov 26 12:38:42 compute-0 conmon[94844]: conmon 061d1dda6ea6cbd95273 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-061d1dda6ea6cbd952739fe97c6f654070bee8b01e5c04381a04f698a4dc935f.scope/container/memory.events
Nov 26 12:38:42 compute-0 podman[94870]: 2025-11-26 12:38:42.335328054 +0000 UTC m=+0.015425646 container died 061d1dda6ea6cbd952739fe97c6f654070bee8b01e5c04381a04f698a4dc935f (image=quay.io/ceph/ceph:v18, name=cool_archimedes, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 12:38:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf147d2ef535ade53e6020421821c69f2532a6690f362eabef84a5c356128656-merged.mount: Deactivated successfully.
Nov 26 12:38:42 compute-0 podman[94870]: 2025-11-26 12:38:42.356513628 +0000 UTC m=+0.036611219 container remove 061d1dda6ea6cbd952739fe97c6f654070bee8b01e5c04381a04f698a4dc935f (image=quay.io/ceph/ceph:v18, name=cool_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:38:42 compute-0 systemd[1]: libpod-conmon-061d1dda6ea6cbd952739fe97c6f654070bee8b01e5c04381a04f698a4dc935f.scope: Deactivated successfully.
Nov 26 12:38:42 compute-0 sudo[94829]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:42 compute-0 sudo[94905]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joezummiwntxsyhwdhsdojaejzhxacjg ; /usr/bin/python3'
Nov 26 12:38:42 compute-0 sudo[94905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:42 compute-0 python3[94907]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:38:42 compute-0 podman[94908]: 2025-11-26 12:38:42.608417343 +0000 UTC m=+0.026543020 container create d3c3f78321190430344a21a8072def2df5c8351bb7ee18e6abc6b91487e71a3e (image=quay.io/ceph/ceph:v18, name=magical_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 26 12:38:42 compute-0 systemd[1]: Started libpod-conmon-d3c3f78321190430344a21a8072def2df5c8351bb7ee18e6abc6b91487e71a3e.scope.
Nov 26 12:38:42 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adc2362f292d6ce376e9400d508dc54eaec3fffc6134a3617c68bec85f13aae8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adc2362f292d6ce376e9400d508dc54eaec3fffc6134a3617c68bec85f13aae8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:42 compute-0 podman[94908]: 2025-11-26 12:38:42.663164897 +0000 UTC m=+0.081290594 container init d3c3f78321190430344a21a8072def2df5c8351bb7ee18e6abc6b91487e71a3e (image=quay.io/ceph/ceph:v18, name=magical_moore, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 12:38:42 compute-0 podman[94908]: 2025-11-26 12:38:42.667008919 +0000 UTC m=+0.085134596 container start d3c3f78321190430344a21a8072def2df5c8351bb7ee18e6abc6b91487e71a3e (image=quay.io/ceph/ceph:v18, name=magical_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:38:42 compute-0 podman[94908]: 2025-11-26 12:38:42.668128908 +0000 UTC m=+0.086254586 container attach d3c3f78321190430344a21a8072def2df5c8351bb7ee18e6abc6b91487e71a3e (image=quay.io/ceph/ceph:v18, name=magical_moore, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:38:42 compute-0 podman[94908]: 2025-11-26 12:38:42.598101044 +0000 UTC m=+0.016226742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:38:42 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Nov 26 12:38:42 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 26 12:38:43 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Nov 26 12:38:43 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2492885918' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 26 12:38:43 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Nov 26 12:38:43 compute-0 ceph-mon[74966]: pgmap v50: 131 pgs: 1 peering, 62 unknown, 68 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:38:43 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1601133733' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 26 12:38:43 compute-0 ceph-mon[74966]: osdmap e27: 3 total, 3 up, 3 in
Nov 26 12:38:43 compute-0 ceph-mon[74966]: 2.2 scrub starts
Nov 26 12:38:43 compute-0 ceph-mon[74966]: 2.2 scrub ok
Nov 26 12:38:43 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2492885918' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 26 12:38:43 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2492885918' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 26 12:38:43 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Nov 26 12:38:43 compute-0 magical_moore[94920]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Nov 26 12:38:43 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.1 deep-scrub starts
Nov 26 12:38:43 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Nov 26 12:38:43 compute-0 systemd[1]: libpod-d3c3f78321190430344a21a8072def2df5c8351bb7ee18e6abc6b91487e71a3e.scope: Deactivated successfully.
Nov 26 12:38:43 compute-0 podman[94908]: 2025-11-26 12:38:43.312152361 +0000 UTC m=+0.730278058 container died d3c3f78321190430344a21a8072def2df5c8351bb7ee18e6abc6b91487e71a3e (image=quay.io/ceph/ceph:v18, name=magical_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:38:43 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.1 deep-scrub ok
Nov 26 12:38:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-adc2362f292d6ce376e9400d508dc54eaec3fffc6134a3617c68bec85f13aae8-merged.mount: Deactivated successfully.
Nov 26 12:38:43 compute-0 podman[94908]: 2025-11-26 12:38:43.333352331 +0000 UTC m=+0.751478008 container remove d3c3f78321190430344a21a8072def2df5c8351bb7ee18e6abc6b91487e71a3e (image=quay.io/ceph/ceph:v18, name=magical_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 12:38:43 compute-0 systemd[1]: libpod-conmon-d3c3f78321190430344a21a8072def2df5c8351bb7ee18e6abc6b91487e71a3e.scope: Deactivated successfully.
Nov 26 12:38:43 compute-0 sudo[94905]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:43 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Nov 26 12:38:43 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Nov 26 12:38:43 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v53: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:38:43 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 12:38:43 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 12:38:43 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 12:38:43 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 12:38:43 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 12:38:43 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 12:38:43 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 12:38:43 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 12:38:43 compute-0 python3[95030]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 12:38:44 compute-0 python3[95101]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764160723.7980077-37044-39369632318689/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:38:44 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Nov 26 12:38:44 compute-0 ceph-mon[74966]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Nov 26 12:38:44 compute-0 ceph-mon[74966]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 26 12:38:44 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 12:38:44 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 12:38:44 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 12:38:44 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 12:38:44 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Nov 26 12:38:44 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.1e( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988976479s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505214691s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.1b( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.966246605s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.482488632s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.19( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.966189384s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.482465744s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.1b( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.966205597s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.482488632s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.19( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.966147423s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.482465744s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.966093063s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.482452393s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.1d( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988794327s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505176544s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.1d( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988755226s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505176544s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.966007233s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.482452393s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.16( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.965889931s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.482442856s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.16( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.965874672s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.482442856s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.11( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988616943s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505237579s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[2.1b( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.11( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988597870s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505237579s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[5.11( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.1e( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988555908s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505214691s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[2.17( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.15( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.965748787s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.482437134s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[5.13( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.15( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.965732574s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.482437134s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.12( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988538742s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505271912s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[2.15( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.12( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988524437s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505271912s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[5.12( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.13( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988522530s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505287170s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[5.16( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.13( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988505363s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505287170s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.13( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.965608597s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.482414246s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.13( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.965594292s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.482414246s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.14( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988453865s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505287170s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.15( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988427162s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505290985s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.14( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988440514s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505287170s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.15( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988411903s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505290985s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.16( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988368988s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505310059s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.16( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988312721s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505310059s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.17( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.965442657s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.482463837s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.9( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988078117s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505344391s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.17( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.965351105s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.482463837s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.11( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.965111732s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.482398987s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[5.9( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[2.d( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[2.a( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[2.3( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[2.5( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[2.4( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.9( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.988045692s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505344391s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.11( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.965085030s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.482398987s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.d( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.964999199s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.482330322s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[2.7( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.d( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.964978218s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.482330322s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.7( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987951279s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505378723s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.7( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987937927s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505378723s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.2( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.961322784s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.478792191s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[2.6( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.2( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.961305618s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.478792191s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.5( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987884521s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505397797s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[5.1( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.3( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.961241722s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.478790283s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.5( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987843513s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505397797s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.3( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.961225510s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.478790283s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[2.9( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2492885918' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.4( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987816811s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505416870s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-mon[74966]: 3.1 deep-scrub starts
Nov 26 12:38:44 compute-0 ceph-mon[74966]: osdmap e28: 3 total, 3 up, 3 in
Nov 26 12:38:44 compute-0 ceph-mon[74966]: 3.1 deep-scrub ok
Nov 26 12:38:44 compute-0 ceph-mon[74966]: 2.3 scrub starts
Nov 26 12:38:44 compute-0 ceph-mon[74966]: 2.3 scrub ok
Nov 26 12:38:44 compute-0 ceph-mon[74966]: pgmap v53: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:38:44 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 12:38:44 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 12:38:44 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[5.f( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[5.c( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.4( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987801552s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505416870s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[5.1d( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.7( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.964609146s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.482233047s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.7( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.964589119s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.482233047s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.4( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.961126328s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.478786469s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[5.1a( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.4( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.961111069s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.478786469s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.964792252s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.482503891s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[5.18( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.3( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987697601s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505424500s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.964778900s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.482503891s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.3( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987675667s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505424500s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[5.19( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.2( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987970352s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505790710s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.2( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987954140s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505790710s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.6( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.960905075s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.478773117s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.1( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987540245s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505432129s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.6( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.960890770s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.478773117s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.1f( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.955610275s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.199962616s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.1f( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.955582619s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.199962616s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.1( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987526894s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505432129s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.1e( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.955240250s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.199695587s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.1e( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.955218315s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.199695587s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.1d( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.955148697s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.199703217s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.1d( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.955130577s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.199703217s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.8( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.960819244s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.478759766s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.1b( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.955301285s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.199954987s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.1b( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.955283165s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.199954987s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.f( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987499237s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505443573s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.a( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.955018044s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.199752808s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.a( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.955002785s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.199752808s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.9( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.960766792s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.478731155s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.9( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.955121994s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.199932098s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.9( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.955104828s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.199932098s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.f( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987483025s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505443573s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.8( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.955128670s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.200031281s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.8( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.955093384s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.200031281s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.9( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.960752487s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.478731155s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.7( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.954947472s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.199966431s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.7( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.954934120s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.199966431s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.a( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.960717201s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.478717804s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.6( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.954885483s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.199993134s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.6( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.954867363s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.199993134s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.8( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.960759163s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.478759766s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.5( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.954810143s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.199996948s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.5( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.954795837s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.199996948s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.a( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.960703850s) [1] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.478717804s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.3( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.954736710s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.200008392s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.3( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.954721451s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.200008392s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.960652351s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.478710175s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.1( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.954665184s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.200012207s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.1( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.954650879s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.200012207s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.960639000s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.478710175s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.c( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987387657s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505470276s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.c( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.954570770s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.200042725s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.c( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.954553604s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.200042725s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.c( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.987366676s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505470276s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.e( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.958213806s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.203784943s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.e( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.958196640s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.203784943s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.960472107s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.478660583s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.f( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.957991600s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.203651428s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.f( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.957976341s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.203651428s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.11( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.957974434s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.203727722s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.11( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.957956314s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.203727722s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.960059166s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.478660583s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.12( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.958221436s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.204078674s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.12( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.958209991s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.204078674s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.1a( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.986842155s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505485535s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.960030556s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.478685379s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.960004807s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.478685379s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.15( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.958020210s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.203979492s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.15( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.958008766s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.203979492s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.16( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.957818985s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.203834534s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.18( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.957925797s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.203968048s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.17( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.957794189s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 35.203834534s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.16( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.957797050s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.203834534s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.17( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.957764626s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.203834534s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[3.18( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.957838058s) [2] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 35.203968048s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.1f( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.963511467s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.482202530s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[2.11( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.1f( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.963496208s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.482202530s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.19( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.986742973s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505485535s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[2.13( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.1a( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.986726761s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505485535s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.18( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.986721039s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 33.505496979s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.19( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.986720085s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505485535s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[5.18( empty local-lis/les=25/26 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.986707687s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 33.505496979s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[5.14( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.963317871s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 31.482355118s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=10.963282585s) [0] r=-1 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 31.482355118s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[5.15( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[3.1e( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[3.1d( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[2.16( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[2.8( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[3.8( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[3.7( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[2.b( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[5.3( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[5.2( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[2.1f( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[3.5( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[3.e( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[3.11( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[3.16( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[3.18( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[2.2( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[5.5( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[2.f( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[2.1c( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[5.4( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[2.1d( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[5.7( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[2.18( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[2.19( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[5.1e( empty local-lis/les=0/0 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[3.1f( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[3.1b( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.8( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.977860451s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.548255920s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.1c( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.973085403s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.543510437s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.8( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.977839470s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.548255920s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.1c( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.973063469s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.543510437s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.7( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.972986221s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.543521881s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.7( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.972971916s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.543521881s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[3.a( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.1b( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.972811699s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.543590546s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.5( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.972809792s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.543621063s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.1b( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.972784042s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.543590546s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.5( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.972793579s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.543621063s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.a( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.972681999s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.543605804s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.1a( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.972705841s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.543632507s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.a( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.972665787s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.543605804s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.1a( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.972690582s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.543632507s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.4( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.977067947s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.548099518s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.4( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.977055550s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.548099518s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.1( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.977051735s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.548171997s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.1( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.977040291s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.548171997s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.2( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976967812s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.548179626s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.2( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976955414s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.548179626s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.9( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976746559s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.548088074s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.9( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976715088s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.548088074s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.e( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976774216s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.548225403s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.e( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976761818s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.548225403s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.d( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976682663s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.548217773s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.d( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976669312s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.548217773s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.10( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976590157s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.548267365s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.f( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976564407s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.548236847s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.10( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976575851s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.548267365s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.f( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976531029s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.548236847s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.11( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976515770s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.548278809s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.11( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976505280s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.548278809s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.12( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976527214s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.548336029s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.12( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976513863s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.548336029s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.13( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976540565s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.548381805s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.13( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976528168s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.548381805s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.18( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976535797s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.548473358s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.18( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.976524353s) [2] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.548473358s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[3.9( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[3.6( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[3.3( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[3.1( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.14( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.975943565s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 40.548412323s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[4.14( empty local-lis/les=25/26 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=12.975918770s) [1] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 40.548412323s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[3.c( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[3.f( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[3.12( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[4.7( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[3.15( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 29 pg[3.17( empty local-lis/les=0/0 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[4.8( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[4.5( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[4.4( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[4.2( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[4.9( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[4.d( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[4.10( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[4.f( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[4.12( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 29 pg[4.14( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[4.1c( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[4.1b( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[4.a( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[4.1( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[4.e( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[4.11( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[4.13( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[4.18( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 29 pg[4.1a( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:38:44 compute-0 sudo[95201]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdnyzbhszpbqxnqoxudqvnjpetcljlyy ; /usr/bin/python3'
Nov 26 12:38:44 compute-0 sudo[95201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:44 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Nov 26 12:38:44 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Nov 26 12:38:44 compute-0 python3[95203]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 12:38:44 compute-0 sudo[95201]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:44 compute-0 sudo[95276]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoarnrxknuzxcduhsjyuiuwmkldnisvf ; /usr/bin/python3'
Nov 26 12:38:44 compute-0 sudo[95276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:44 compute-0 python3[95278]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764160724.4477854-37058-196574120893949/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=f59bc0653853925cdc06336edac42275833fbc2b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:38:44 compute-0 sudo[95276]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:45 compute-0 sudo[95326]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxlaboryxgjrqimhgdondkmvwhtcrwlc ; /usr/bin/python3'
Nov 26 12:38:45 compute-0 sudo[95326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:45 compute-0 python3[95328]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:38:45 compute-0 podman[95329]: 2025-11-26 12:38:45.226872608 +0000 UTC m=+0.028325623 container create 80cfd4b7c3f9b5966c8aa1e74744a25c19d167d774679ff4047baec4cc9f1f0c (image=quay.io/ceph/ceph:v18, name=mystifying_archimedes, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:38:45 compute-0 systemd[1]: Started libpod-conmon-80cfd4b7c3f9b5966c8aa1e74744a25c19d167d774679ff4047baec4cc9f1f0c.scope.
Nov 26 12:38:45 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:45 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Nov 26 12:38:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c32e4d56ee0e41c97efed3e6ec26ddfd1086a37315ef9460ba8c514950bc34d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c32e4d56ee0e41c97efed3e6ec26ddfd1086a37315ef9460ba8c514950bc34d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c32e4d56ee0e41c97efed3e6ec26ddfd1086a37315ef9460ba8c514950bc34d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:45 compute-0 podman[95329]: 2025-11-26 12:38:45.283699957 +0000 UTC m=+0.085152972 container init 80cfd4b7c3f9b5966c8aa1e74744a25c19d167d774679ff4047baec4cc9f1f0c (image=quay.io/ceph/ceph:v18, name=mystifying_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 12:38:45 compute-0 podman[95329]: 2025-11-26 12:38:45.288841444 +0000 UTC m=+0.090294459 container start 80cfd4b7c3f9b5966c8aa1e74744a25c19d167d774679ff4047baec4cc9f1f0c (image=quay.io/ceph/ceph:v18, name=mystifying_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 12:38:45 compute-0 podman[95329]: 2025-11-26 12:38:45.290087512 +0000 UTC m=+0.091540526 container attach 80cfd4b7c3f9b5966c8aa1e74744a25c19d167d774679ff4047baec4cc9f1f0c (image=quay.io/ceph/ceph:v18, name=mystifying_archimedes, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 12:38:45 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Nov 26 12:38:45 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Nov 26 12:38:45 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Nov 26 12:38:45 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Nov 26 12:38:45 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[4.1c( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[3.18( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[4.13( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[3.11( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[4.11( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[3.16( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[4.1( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[4.a( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[3.5( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[3.7( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[4.e( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[3.8( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[4.1a( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[3.1d( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[4.18( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[3.1e( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[4.1b( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [2] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 30 pg[3.e( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [2] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-mon[74966]: Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Nov 26 12:38:45 compute-0 ceph-mon[74966]: Cluster is now healthy
Nov 26 12:38:45 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 12:38:45 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 12:38:45 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 12:38:45 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 12:38:45 compute-0 ceph-mon[74966]: osdmap e29: 3 total, 3 up, 3 in
Nov 26 12:38:45 compute-0 ceph-mon[74966]: 4.3 scrub starts
Nov 26 12:38:45 compute-0 ceph-mon[74966]: 4.3 scrub ok
Nov 26 12:38:45 compute-0 ceph-mon[74966]: osdmap e30: 3 total, 3 up, 3 in
Nov 26 12:38:45 compute-0 podman[95329]: 2025-11-26 12:38:45.215434817 +0000 UTC m=+0.016887853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[5.19( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[5.18( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[5.1a( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[5.f( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[5.1d( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[2.9( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[5.1e( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[2.19( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[5.c( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[2.4( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[2.5( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[2.a( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[5.9( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[2.3( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[5.16( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[2.d( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[2.15( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[5.12( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[5.13( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[2.17( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[5.11( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[2.1b( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[4.10( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[4.12( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[4.14( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[4.8( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[4.9( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[4.5( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[5.1( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[4.7( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[4.2( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[4.4( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[4.f( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[4.d( empty local-lis/les=29/30 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[2.6( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 30 pg[2.7( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [1] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[2.18( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[2.1d( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[5.7( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[2.1c( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[2.f( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[5.5( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[2.2( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[2.1f( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[5.2( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[5.3( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[2.b( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[2.16( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[2.8( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[2.13( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[5.4( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[2.11( empty local-lis/les=29/30 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[3.17( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[5.15( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[3.12( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[3.15( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[3.1f( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[3.f( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[5.14( empty local-lis/les=29/30 n=0 ec=25/20 lis/c=25/25 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[3.c( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[3.1( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[3.9( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[3.3( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[3.a( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[3.6( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 30 pg[3.1b( empty local-lis/les=29/30 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:38:45 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Nov 26 12:38:45 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Nov 26 12:38:45 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 26 12:38:45 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1241135943' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 26 12:38:45 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1241135943' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 26 12:38:45 compute-0 mystifying_archimedes[95341]: 
Nov 26 12:38:45 compute-0 mystifying_archimedes[95341]: [global]
Nov 26 12:38:45 compute-0 mystifying_archimedes[95341]:         fsid = f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 12:38:45 compute-0 mystifying_archimedes[95341]:         mon_host = 192.168.122.100
Nov 26 12:38:45 compute-0 systemd[1]: libpod-80cfd4b7c3f9b5966c8aa1e74744a25c19d167d774679ff4047baec4cc9f1f0c.scope: Deactivated successfully.
Nov 26 12:38:45 compute-0 conmon[95341]: conmon 80cfd4b7c3f9b5966c8a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-80cfd4b7c3f9b5966c8aa1e74744a25c19d167d774679ff4047baec4cc9f1f0c.scope/container/memory.events
Nov 26 12:38:45 compute-0 podman[95329]: 2025-11-26 12:38:45.739460612 +0000 UTC m=+0.540913626 container died 80cfd4b7c3f9b5966c8aa1e74744a25c19d167d774679ff4047baec4cc9f1f0c (image=quay.io/ceph/ceph:v18, name=mystifying_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:38:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c32e4d56ee0e41c97efed3e6ec26ddfd1086a37315ef9460ba8c514950bc34d-merged.mount: Deactivated successfully.
Nov 26 12:38:45 compute-0 podman[95329]: 2025-11-26 12:38:45.763725292 +0000 UTC m=+0.565178307 container remove 80cfd4b7c3f9b5966c8aa1e74744a25c19d167d774679ff4047baec4cc9f1f0c (image=quay.io/ceph/ceph:v18, name=mystifying_archimedes, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 12:38:45 compute-0 systemd[1]: libpod-conmon-80cfd4b7c3f9b5966c8aa1e74744a25c19d167d774679ff4047baec4cc9f1f0c.scope: Deactivated successfully.
Nov 26 12:38:45 compute-0 sudo[95326]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:45 compute-0 sudo[95366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:45 compute-0 sudo[95366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:45 compute-0 sudo[95366]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:45 compute-0 sudo[95401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:38:45 compute-0 sudo[95401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:45 compute-0 sudo[95401]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:45 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v56: 131 pgs: 34 peering, 97 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:38:45 compute-0 sudo[95426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:45 compute-0 sudo[95426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:45 compute-0 sudo[95426]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:45 compute-0 sudo[95473]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwlceunmfoavmyrgpslnnfdoxemitgxs ; /usr/bin/python3'
Nov 26 12:38:45 compute-0 sudo[95473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:45 compute-0 sudo[95475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 26 12:38:45 compute-0 sudo[95475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:45 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:38:45 compute-0 python3[95480]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:38:46 compute-0 podman[95502]: 2025-11-26 12:38:46.035354087 +0000 UTC m=+0.030662794 container create 7d889942ee03894b1830203467bc2fb992e3e9890c0f4f00cfa70a1c7d02517d (image=quay.io/ceph/ceph:v18, name=ecstatic_benz, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:38:46 compute-0 systemd[1]: Started libpod-conmon-7d889942ee03894b1830203467bc2fb992e3e9890c0f4f00cfa70a1c7d02517d.scope.
Nov 26 12:38:46 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69f35def29e68c5c1978e0e8af78345226906e89e8cfb79334e0b19d05286d8d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69f35def29e68c5c1978e0e8af78345226906e89e8cfb79334e0b19d05286d8d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69f35def29e68c5c1978e0e8af78345226906e89e8cfb79334e0b19d05286d8d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:46 compute-0 podman[95502]: 2025-11-26 12:38:46.092221571 +0000 UTC m=+0.087530289 container init 7d889942ee03894b1830203467bc2fb992e3e9890c0f4f00cfa70a1c7d02517d (image=quay.io/ceph/ceph:v18, name=ecstatic_benz, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:38:46 compute-0 podman[95502]: 2025-11-26 12:38:46.097650933 +0000 UTC m=+0.092959642 container start 7d889942ee03894b1830203467bc2fb992e3e9890c0f4f00cfa70a1c7d02517d (image=quay.io/ceph/ceph:v18, name=ecstatic_benz, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:38:46 compute-0 podman[95502]: 2025-11-26 12:38:46.100151945 +0000 UTC m=+0.095460653 container attach 7d889942ee03894b1830203467bc2fb992e3e9890c0f4f00cfa70a1c7d02517d (image=quay.io/ceph/ceph:v18, name=ecstatic_benz, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:38:46 compute-0 podman[95502]: 2025-11-26 12:38:46.021916473 +0000 UTC m=+0.017225191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:38:46 compute-0 podman[95575]: 2025-11-26 12:38:46.286520407 +0000 UTC m=+0.040482284 container exec ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:38:46 compute-0 ceph-mon[74966]: 3.2 scrub starts
Nov 26 12:38:46 compute-0 ceph-mon[74966]: 3.2 scrub ok
Nov 26 12:38:46 compute-0 ceph-mon[74966]: 4.6 scrub starts
Nov 26 12:38:46 compute-0 ceph-mon[74966]: 4.6 scrub ok
Nov 26 12:38:46 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1241135943' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 26 12:38:46 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1241135943' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 26 12:38:46 compute-0 ceph-mon[74966]: pgmap v56: 131 pgs: 34 peering, 97 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:38:46 compute-0 podman[95575]: 2025-11-26 12:38:46.367693688 +0000 UTC m=+0.121655555 container exec_died ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:38:46 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.b scrub starts
Nov 26 12:38:46 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.b scrub ok
Nov 26 12:38:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Nov 26 12:38:46 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3736164057' entity='client.admin' 
Nov 26 12:38:46 compute-0 ecstatic_benz[95532]: set ssl_option
Nov 26 12:38:46 compute-0 systemd[1]: libpod-7d889942ee03894b1830203467bc2fb992e3e9890c0f4f00cfa70a1c7d02517d.scope: Deactivated successfully.
Nov 26 12:38:46 compute-0 podman[95502]: 2025-11-26 12:38:46.618953695 +0000 UTC m=+0.614262414 container died 7d889942ee03894b1830203467bc2fb992e3e9890c0f4f00cfa70a1c7d02517d (image=quay.io/ceph/ceph:v18, name=ecstatic_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 12:38:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-69f35def29e68c5c1978e0e8af78345226906e89e8cfb79334e0b19d05286d8d-merged.mount: Deactivated successfully.
Nov 26 12:38:46 compute-0 podman[95502]: 2025-11-26 12:38:46.641488201 +0000 UTC m=+0.636796910 container remove 7d889942ee03894b1830203467bc2fb992e3e9890c0f4f00cfa70a1c7d02517d (image=quay.io/ceph/ceph:v18, name=ecstatic_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:38:46 compute-0 systemd[1]: libpod-conmon-7d889942ee03894b1830203467bc2fb992e3e9890c0f4f00cfa70a1c7d02517d.scope: Deactivated successfully.
Nov 26 12:38:46 compute-0 sudo[95473]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:46 compute-0 sudo[95475]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:38:46 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:38:46 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:38:46 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:38:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 12:38:46 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:38:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 12:38:46 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:46 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 65b822e2-2cc8-4854-aa1f-a77e3f35e3c2 does not exist
Nov 26 12:38:46 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 35931af2-2cae-4920-86f0-5af614a04373 does not exist
Nov 26 12:38:46 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 631a5cf6-93a7-4ed3-8901-d8979f803a63 does not exist
Nov 26 12:38:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 12:38:46 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:38:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 12:38:46 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:38:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:38:46 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:38:46 compute-0 sudo[95705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:46 compute-0 sudo[95705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:46 compute-0 sudo[95705]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:46 compute-0 sudo[95753]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikfazcmaabngfiotvuaykvqltavmsvlt ; /usr/bin/python3'
Nov 26 12:38:46 compute-0 sudo[95753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:46 compute-0 sudo[95754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:38:46 compute-0 sudo[95754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:46 compute-0 sudo[95754]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:46 compute-0 sudo[95781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:46 compute-0 sudo[95781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:46 compute-0 sudo[95781]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:46 compute-0 sudo[95806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 12:38:46 compute-0 sudo[95806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:46 compute-0 python3[95763]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:38:46 compute-0 podman[95831]: 2025-11-26 12:38:46.920097694 +0000 UTC m=+0.027538303 container create bda7263d224fb9c281fd185a4d85a0d75d442b13b06d7e231aa35cfb692f4020 (image=quay.io/ceph/ceph:v18, name=optimistic_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 12:38:46 compute-0 systemd[1]: Started libpod-conmon-bda7263d224fb9c281fd185a4d85a0d75d442b13b06d7e231aa35cfb692f4020.scope.
Nov 26 12:38:46 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e37c65679871074fd5ba7fa6c20af4bb15e102a99666cab40683b86549746126/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e37c65679871074fd5ba7fa6c20af4bb15e102a99666cab40683b86549746126/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e37c65679871074fd5ba7fa6c20af4bb15e102a99666cab40683b86549746126/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:46 compute-0 podman[95831]: 2025-11-26 12:38:46.972107588 +0000 UTC m=+0.079548207 container init bda7263d224fb9c281fd185a4d85a0d75d442b13b06d7e231aa35cfb692f4020 (image=quay.io/ceph/ceph:v18, name=optimistic_easley, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 12:38:46 compute-0 podman[95831]: 2025-11-26 12:38:46.979538086 +0000 UTC m=+0.086978686 container start bda7263d224fb9c281fd185a4d85a0d75d442b13b06d7e231aa35cfb692f4020 (image=quay.io/ceph/ceph:v18, name=optimistic_easley, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 12:38:46 compute-0 podman[95831]: 2025-11-26 12:38:46.980665009 +0000 UTC m=+0.088105618 container attach bda7263d224fb9c281fd185a4d85a0d75d442b13b06d7e231aa35cfb692f4020 (image=quay.io/ceph/ceph:v18, name=optimistic_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Nov 26 12:38:47 compute-0 podman[95831]: 2025-11-26 12:38:46.909724588 +0000 UTC m=+0.017165207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:38:47 compute-0 podman[95879]: 2025-11-26 12:38:47.111562719 +0000 UTC m=+0.026427272 container create 72bc072d911232c3e72fcc1058d25c66b8cf0bbfb4f676fb79823d4b119e6b5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_williams, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:38:47 compute-0 systemd[1]: Started libpod-conmon-72bc072d911232c3e72fcc1058d25c66b8cf0bbfb4f676fb79823d4b119e6b5c.scope.
Nov 26 12:38:47 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:47 compute-0 podman[95879]: 2025-11-26 12:38:47.172116979 +0000 UTC m=+0.086981542 container init 72bc072d911232c3e72fcc1058d25c66b8cf0bbfb4f676fb79823d4b119e6b5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_williams, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:38:47 compute-0 podman[95879]: 2025-11-26 12:38:47.176273924 +0000 UTC m=+0.091138476 container start 72bc072d911232c3e72fcc1058d25c66b8cf0bbfb4f676fb79823d4b119e6b5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:38:47 compute-0 podman[95879]: 2025-11-26 12:38:47.177428028 +0000 UTC m=+0.092292600 container attach 72bc072d911232c3e72fcc1058d25c66b8cf0bbfb4f676fb79823d4b119e6b5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 12:38:47 compute-0 adoring_williams[95892]: 167 167
Nov 26 12:38:47 compute-0 systemd[1]: libpod-72bc072d911232c3e72fcc1058d25c66b8cf0bbfb4f676fb79823d4b119e6b5c.scope: Deactivated successfully.
Nov 26 12:38:47 compute-0 podman[95879]: 2025-11-26 12:38:47.179474178 +0000 UTC m=+0.094338851 container died 72bc072d911232c3e72fcc1058d25c66b8cf0bbfb4f676fb79823d4b119e6b5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:38:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-00197932d927709c508c8fc3970a45be4c9c1ef084f0e7f0c797526f5cb81151-merged.mount: Deactivated successfully.
Nov 26 12:38:47 compute-0 podman[95879]: 2025-11-26 12:38:47.100608804 +0000 UTC m=+0.015473377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:47 compute-0 podman[95879]: 2025-11-26 12:38:47.199260776 +0000 UTC m=+0.114125328 container remove 72bc072d911232c3e72fcc1058d25c66b8cf0bbfb4f676fb79823d4b119e6b5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_williams, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 12:38:47 compute-0 systemd[1]: libpod-conmon-72bc072d911232c3e72fcc1058d25c66b8cf0bbfb4f676fb79823d4b119e6b5c.scope: Deactivated successfully.
Nov 26 12:38:47 compute-0 podman[95932]: 2025-11-26 12:38:47.310818704 +0000 UTC m=+0.027075188 container create 982e5212efe82a270424e8cb6e22f7dfe71c709c3d24769d89e0fec1e32f0d8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_rubin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:38:47 compute-0 systemd[1]: Started libpod-conmon-982e5212efe82a270424e8cb6e22f7dfe71c709c3d24769d89e0fec1e32f0d8a.scope.
Nov 26 12:38:47 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14957c928b0f8ddb0de9d91276db324c787c950708c1b7475f84b824c929936/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14957c928b0f8ddb0de9d91276db324c787c950708c1b7475f84b824c929936/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14957c928b0f8ddb0de9d91276db324c787c950708c1b7475f84b824c929936/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14957c928b0f8ddb0de9d91276db324c787c950708c1b7475f84b824c929936/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14957c928b0f8ddb0de9d91276db324c787c950708c1b7475f84b824c929936/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:47 compute-0 podman[95932]: 2025-11-26 12:38:47.372146237 +0000 UTC m=+0.088402720 container init 982e5212efe82a270424e8cb6e22f7dfe71c709c3d24769d89e0fec1e32f0d8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_rubin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 12:38:47 compute-0 podman[95932]: 2025-11-26 12:38:47.376643245 +0000 UTC m=+0.092899729 container start 982e5212efe82a270424e8cb6e22f7dfe71c709c3d24769d89e0fec1e32f0d8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_rubin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 12:38:47 compute-0 podman[95932]: 2025-11-26 12:38:47.377991466 +0000 UTC m=+0.094247950 container attach 982e5212efe82a270424e8cb6e22f7dfe71c709c3d24769d89e0fec1e32f0d8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_rubin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 12:38:47 compute-0 podman[95932]: 2025-11-26 12:38:47.300016847 +0000 UTC m=+0.016273331 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:47 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:38:47 compute-0 ceph-mgr[75236]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Nov 26 12:38:47 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 26 12:38:47 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 26 12:38:47 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:47 compute-0 optimistic_easley[95844]: Scheduled rgw.rgw update...
Nov 26 12:38:47 compute-0 systemd[1]: libpod-bda7263d224fb9c281fd185a4d85a0d75d442b13b06d7e231aa35cfb692f4020.scope: Deactivated successfully.
Nov 26 12:38:47 compute-0 conmon[95844]: conmon bda7263d224fb9c281fd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bda7263d224fb9c281fd185a4d85a0d75d442b13b06d7e231aa35cfb692f4020.scope/container/memory.events
Nov 26 12:38:47 compute-0 podman[95952]: 2025-11-26 12:38:47.490211328 +0000 UTC m=+0.018775545 container died bda7263d224fb9c281fd185a4d85a0d75d442b13b06d7e231aa35cfb692f4020 (image=quay.io/ceph/ceph:v18, name=optimistic_easley, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:38:47 compute-0 podman[95952]: 2025-11-26 12:38:47.509822011 +0000 UTC m=+0.038386208 container remove bda7263d224fb9c281fd185a4d85a0d75d442b13b06d7e231aa35cfb692f4020 (image=quay.io/ceph/ceph:v18, name=optimistic_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:38:47 compute-0 systemd[1]: libpod-conmon-bda7263d224fb9c281fd185a4d85a0d75d442b13b06d7e231aa35cfb692f4020.scope: Deactivated successfully.
Nov 26 12:38:47 compute-0 sudo[95753]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:47 compute-0 ceph-mon[74966]: 4.b scrub starts
Nov 26 12:38:47 compute-0 ceph-mon[74966]: 4.b scrub ok
Nov 26 12:38:47 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/3736164057' entity='client.admin' 
Nov 26 12:38:47 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:47 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:47 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:38:47 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:38:47 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:47 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:38:47 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:38:47 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:38:47 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-e37c65679871074fd5ba7fa6c20af4bb15e102a99666cab40683b86549746126-merged.mount: Deactivated successfully.
Nov 26 12:38:47 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v57: 131 pgs: 34 peering, 97 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:38:48 compute-0 cool_rubin[95945]: --> passed data devices: 0 physical, 3 LVM
Nov 26 12:38:48 compute-0 cool_rubin[95945]: --> relative data size: 1.0
Nov 26 12:38:48 compute-0 cool_rubin[95945]: --> All data devices are unavailable
Nov 26 12:38:48 compute-0 systemd[1]: libpod-982e5212efe82a270424e8cb6e22f7dfe71c709c3d24769d89e0fec1e32f0d8a.scope: Deactivated successfully.
Nov 26 12:38:48 compute-0 podman[95932]: 2025-11-26 12:38:48.207433622 +0000 UTC m=+0.923690106 container died 982e5212efe82a270424e8cb6e22f7dfe71c709c3d24769d89e0fec1e32f0d8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 12:38:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-a14957c928b0f8ddb0de9d91276db324c787c950708c1b7475f84b824c929936-merged.mount: Deactivated successfully.
Nov 26 12:38:48 compute-0 python3[96058]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 12:38:48 compute-0 podman[95932]: 2025-11-26 12:38:48.25478933 +0000 UTC m=+0.971045814 container remove 982e5212efe82a270424e8cb6e22f7dfe71c709c3d24769d89e0fec1e32f0d8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 26 12:38:48 compute-0 systemd[1]: libpod-conmon-982e5212efe82a270424e8cb6e22f7dfe71c709c3d24769d89e0fec1e32f0d8a.scope: Deactivated successfully.
Nov 26 12:38:48 compute-0 sudo[95806]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:48 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 26 12:38:48 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 26 12:38:48 compute-0 sudo[96091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:48 compute-0 sudo[96091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:48 compute-0 sudo[96091]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:48 compute-0 sudo[96144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:38:48 compute-0 sudo[96144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:48 compute-0 sudo[96144]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:48 compute-0 sudo[96193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:48 compute-0 sudo[96193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:48 compute-0 sudo[96193]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:48 compute-0 sudo[96219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- lvm list --format json
Nov 26 12:38:48 compute-0 sudo[96219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:48 compute-0 python3[96196]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764160728.0442-37099-40743055204557/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:38:48 compute-0 ceph-mon[74966]: from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:38:48 compute-0 ceph-mon[74966]: Saving service rgw.rgw spec with placement compute-0
Nov 26 12:38:48 compute-0 ceph-mon[74966]: pgmap v57: 131 pgs: 34 peering, 97 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:38:48 compute-0 podman[96297]: 2025-11-26 12:38:48.676826931 +0000 UTC m=+0.025504816 container create b008384995b5a9d28264c4c78608ce8baba454c083d92e51d23e38dc3f3e5200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_agnesi, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 12:38:48 compute-0 systemd[1]: Started libpod-conmon-b008384995b5a9d28264c4c78608ce8baba454c083d92e51d23e38dc3f3e5200.scope.
Nov 26 12:38:48 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:48 compute-0 podman[96297]: 2025-11-26 12:38:48.720717723 +0000 UTC m=+0.069395618 container init b008384995b5a9d28264c4c78608ce8baba454c083d92e51d23e38dc3f3e5200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_agnesi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Nov 26 12:38:48 compute-0 podman[96297]: 2025-11-26 12:38:48.726456 +0000 UTC m=+0.075133885 container start b008384995b5a9d28264c4c78608ce8baba454c083d92e51d23e38dc3f3e5200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_agnesi, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 12:38:48 compute-0 podman[96297]: 2025-11-26 12:38:48.727574035 +0000 UTC m=+0.076251930 container attach b008384995b5a9d28264c4c78608ce8baba454c083d92e51d23e38dc3f3e5200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 26 12:38:48 compute-0 strange_agnesi[96315]: 167 167
Nov 26 12:38:48 compute-0 systemd[1]: libpod-b008384995b5a9d28264c4c78608ce8baba454c083d92e51d23e38dc3f3e5200.scope: Deactivated successfully.
Nov 26 12:38:48 compute-0 conmon[96315]: conmon b008384995b5a9d28264 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b008384995b5a9d28264c4c78608ce8baba454c083d92e51d23e38dc3f3e5200.scope/container/memory.events
Nov 26 12:38:48 compute-0 podman[96297]: 2025-11-26 12:38:48.730498438 +0000 UTC m=+0.079176323 container died b008384995b5a9d28264c4c78608ce8baba454c083d92e51d23e38dc3f3e5200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_agnesi, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:38:48 compute-0 sudo[96338]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wafhljmfrjvmtbkkumlfxjrysbxbuebs ; /usr/bin/python3'
Nov 26 12:38:48 compute-0 sudo[96338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-62bc359fdcf477465e5818463bbced9d1256fc2711aec9341d2b38bac8659573-merged.mount: Deactivated successfully.
Nov 26 12:38:48 compute-0 podman[96297]: 2025-11-26 12:38:48.749403277 +0000 UTC m=+0.098081162 container remove b008384995b5a9d28264c4c78608ce8baba454c083d92e51d23e38dc3f3e5200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 12:38:48 compute-0 podman[96297]: 2025-11-26 12:38:48.666934795 +0000 UTC m=+0.015612710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:48 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.c scrub starts
Nov 26 12:38:48 compute-0 systemd[1]: libpod-conmon-b008384995b5a9d28264c4c78608ce8baba454c083d92e51d23e38dc3f3e5200.scope: Deactivated successfully.
Nov 26 12:38:48 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.c scrub ok
Nov 26 12:38:48 compute-0 podman[96359]: 2025-11-26 12:38:48.861065374 +0000 UTC m=+0.028549547 container create 14897a949240c9d3af4dc725f3f67ef863831aad2f799b42ef1227bcfc6236f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wozniak, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:38:48 compute-0 python3[96343]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:38:48 compute-0 systemd[1]: Started libpod-conmon-14897a949240c9d3af4dc725f3f67ef863831aad2f799b42ef1227bcfc6236f7.scope.
Nov 26 12:38:48 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8ad871be9c09068b741df133bfbe9e7e34fd42600e01c0070d9005b0d7867ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8ad871be9c09068b741df133bfbe9e7e34fd42600e01c0070d9005b0d7867ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8ad871be9c09068b741df133bfbe9e7e34fd42600e01c0070d9005b0d7867ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8ad871be9c09068b741df133bfbe9e7e34fd42600e01c0070d9005b0d7867ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:48 compute-0 podman[96370]: 2025-11-26 12:38:48.909339578 +0000 UTC m=+0.029599875 container create 16bc417dc694ef028f4b5fa6ffaed48184f178c2c3b0adbae71490e0c468d128 (image=quay.io/ceph/ceph:v18, name=sharp_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 12:38:48 compute-0 podman[96359]: 2025-11-26 12:38:48.911060285 +0000 UTC m=+0.078544469 container init 14897a949240c9d3af4dc725f3f67ef863831aad2f799b42ef1227bcfc6236f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wozniak, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 12:38:48 compute-0 podman[96359]: 2025-11-26 12:38:48.918895438 +0000 UTC m=+0.086379612 container start 14897a949240c9d3af4dc725f3f67ef863831aad2f799b42ef1227bcfc6236f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 12:38:48 compute-0 podman[96359]: 2025-11-26 12:38:48.919887324 +0000 UTC m=+0.087371499 container attach 14897a949240c9d3af4dc725f3f67ef863831aad2f799b42ef1227bcfc6236f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 12:38:48 compute-0 systemd[1]: Started libpod-conmon-16bc417dc694ef028f4b5fa6ffaed48184f178c2c3b0adbae71490e0c468d128.scope.
Nov 26 12:38:48 compute-0 podman[96359]: 2025-11-26 12:38:48.849211455 +0000 UTC m=+0.016695650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:48 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ed44f283c0b7cf7d8c9d9a208e4d07a5278920343b9eb473f34da66ec4084cc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ed44f283c0b7cf7d8c9d9a208e4d07a5278920343b9eb473f34da66ec4084cc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ed44f283c0b7cf7d8c9d9a208e4d07a5278920343b9eb473f34da66ec4084cc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:48 compute-0 podman[96370]: 2025-11-26 12:38:48.96750599 +0000 UTC m=+0.087766296 container init 16bc417dc694ef028f4b5fa6ffaed48184f178c2c3b0adbae71490e0c468d128 (image=quay.io/ceph/ceph:v18, name=sharp_jackson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 12:38:48 compute-0 podman[96370]: 2025-11-26 12:38:48.972006755 +0000 UTC m=+0.092267051 container start 16bc417dc694ef028f4b5fa6ffaed48184f178c2c3b0adbae71490e0c468d128 (image=quay.io/ceph/ceph:v18, name=sharp_jackson, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:38:48 compute-0 podman[96370]: 2025-11-26 12:38:48.973037375 +0000 UTC m=+0.093297672 container attach 16bc417dc694ef028f4b5fa6ffaed48184f178c2c3b0adbae71490e0c468d128 (image=quay.io/ceph/ceph:v18, name=sharp_jackson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 12:38:48 compute-0 podman[96370]: 2025-11-26 12:38:48.897323584 +0000 UTC m=+0.017583880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:38:49 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.b scrub starts
Nov 26 12:38:49 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.b scrub ok
Nov 26 12:38:49 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:38:49 compute-0 ceph-mgr[75236]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 26 12:38:49 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Nov 26 12:38:49 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 26 12:38:49 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Nov 26 12:38:49 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 26 12:38:49 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Nov 26 12:38:49 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 26 12:38:49 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 26 12:38:49 compute-0 ceph-mon[74966]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 26 12:38:49 compute-0 ceph-mon[74966]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 26 12:38:49 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0[74962]: 2025-11-26T12:38:49.414+0000 7f8067058640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 26 12:38:49 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 26 12:38:49 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).mds e2 new map
Nov 26 12:38:49 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-26T12:38:49.414687+0000
                                           modified        2025-11-26T12:38:49.414741+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Nov 26 12:38:49 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Nov 26 12:38:49 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Nov 26 12:38:49 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Nov 26 12:38:49 compute-0 ceph-mgr[75236]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 26 12:38:49 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 26 12:38:49 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 26 12:38:49 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:49 compute-0 ceph-mgr[75236]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 26 12:38:49 compute-0 systemd[1]: libpod-16bc417dc694ef028f4b5fa6ffaed48184f178c2c3b0adbae71490e0c468d128.scope: Deactivated successfully.
Nov 26 12:38:49 compute-0 podman[96370]: 2025-11-26 12:38:49.438230978 +0000 UTC m=+0.558491274 container died 16bc417dc694ef028f4b5fa6ffaed48184f178c2c3b0adbae71490e0c468d128 (image=quay.io/ceph/ceph:v18, name=sharp_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:38:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ed44f283c0b7cf7d8c9d9a208e4d07a5278920343b9eb473f34da66ec4084cc-merged.mount: Deactivated successfully.
Nov 26 12:38:49 compute-0 podman[96370]: 2025-11-26 12:38:49.46148722 +0000 UTC m=+0.581747515 container remove 16bc417dc694ef028f4b5fa6ffaed48184f178c2c3b0adbae71490e0c468d128 (image=quay.io/ceph/ceph:v18, name=sharp_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:38:49 compute-0 sudo[96338]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:49 compute-0 systemd[1]: libpod-conmon-16bc417dc694ef028f4b5fa6ffaed48184f178c2c3b0adbae71490e0c468d128.scope: Deactivated successfully.
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]: {
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:     "0": [
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:         {
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "devices": [
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "/dev/loop3"
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             ],
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "lv_name": "ceph_lv0",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "lv_size": "21470642176",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "name": "ceph_lv0",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "tags": {
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.cluster_name": "ceph",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.crush_device_class": "",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.encrypted": "0",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.osd_id": "0",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.type": "block",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.vdo": "0"
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             },
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "type": "block",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "vg_name": "ceph_vg0"
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:         }
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:     ],
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:     "1": [
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:         {
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "devices": [
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "/dev/loop4"
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             ],
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "lv_name": "ceph_lv1",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "lv_size": "21470642176",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "name": "ceph_lv1",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "tags": {
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.cluster_name": "ceph",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.crush_device_class": "",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.encrypted": "0",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.osd_id": "1",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.type": "block",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.vdo": "0"
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             },
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "type": "block",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "vg_name": "ceph_vg1"
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:         }
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:     ],
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:     "2": [
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:         {
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "devices": [
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "/dev/loop5"
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             ],
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "lv_name": "ceph_lv2",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "lv_size": "21470642176",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "name": "ceph_lv2",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "tags": {
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.cluster_name": "ceph",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.crush_device_class": "",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.encrypted": "0",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.osd_id": "2",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.type": "block",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:                 "ceph.vdo": "0"
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             },
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "type": "block",
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:             "vg_name": "ceph_vg2"
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:         }
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]:     ]
Nov 26 12:38:49 compute-0 relaxed_wozniak[96379]: }
Nov 26 12:38:49 compute-0 systemd[1]: libpod-14897a949240c9d3af4dc725f3f67ef863831aad2f799b42ef1227bcfc6236f7.scope: Deactivated successfully.
Nov 26 12:38:49 compute-0 podman[96359]: 2025-11-26 12:38:49.574317646 +0000 UTC m=+0.741801820 container died 14897a949240c9d3af4dc725f3f67ef863831aad2f799b42ef1227bcfc6236f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 12:38:49 compute-0 sudo[96452]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avbllxuawztrcjsagtbggyrtspgvoyky ; /usr/bin/python3'
Nov 26 12:38:49 compute-0 sudo[96452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:49 compute-0 podman[96359]: 2025-11-26 12:38:49.604613817 +0000 UTC m=+0.772097990 container remove 14897a949240c9d3af4dc725f3f67ef863831aad2f799b42ef1227bcfc6236f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 12:38:49 compute-0 systemd[1]: libpod-conmon-14897a949240c9d3af4dc725f3f67ef863831aad2f799b42ef1227bcfc6236f7.scope: Deactivated successfully.
Nov 26 12:38:49 compute-0 ceph-mon[74966]: 3.4 scrub starts
Nov 26 12:38:49 compute-0 ceph-mon[74966]: 3.4 scrub ok
Nov 26 12:38:49 compute-0 ceph-mon[74966]: 2.c scrub starts
Nov 26 12:38:49 compute-0 ceph-mon[74966]: 2.c scrub ok
Nov 26 12:38:49 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 26 12:38:49 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 26 12:38:49 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 26 12:38:49 compute-0 ceph-mon[74966]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 26 12:38:49 compute-0 ceph-mon[74966]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 26 12:38:49 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 26 12:38:49 compute-0 ceph-mon[74966]: osdmap e31: 3 total, 3 up, 3 in
Nov 26 12:38:49 compute-0 ceph-mon[74966]: fsmap cephfs:0
Nov 26 12:38:49 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:49 compute-0 sudo[96219]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:49 compute-0 sudo[96464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:49 compute-0 sudo[96464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:49 compute-0 sudo[96464]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8ad871be9c09068b741df133bfbe9e7e34fd42600e01c0070d9005b0d7867ac-merged.mount: Deactivated successfully.
Nov 26 12:38:49 compute-0 sudo[96489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:38:49 compute-0 sudo[96489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:49 compute-0 sudo[96489]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:49 compute-0 python3[96462]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:38:49 compute-0 sudo[96514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:49 compute-0 sudo[96514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:49 compute-0 sudo[96514]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:49 compute-0 podman[96515]: 2025-11-26 12:38:49.758824295 +0000 UTC m=+0.029176413 container create 58c0cc91ee682b6075ea5f7da0065a1e181109b34d508e7413c748eae8166681 (image=quay.io/ceph/ceph:v18, name=ecstatic_ishizaka, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 12:38:49 compute-0 systemd[1]: Started libpod-conmon-58c0cc91ee682b6075ea5f7da0065a1e181109b34d508e7413c748eae8166681.scope.
Nov 26 12:38:49 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7209af3e1e406b702564d5f908c68808509ea221c196a0871ce8bb2db6cf4788/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7209af3e1e406b702564d5f908c68808509ea221c196a0871ce8bb2db6cf4788/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7209af3e1e406b702564d5f908c68808509ea221c196a0871ce8bb2db6cf4788/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:49 compute-0 sudo[96549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- raw list --format json
Nov 26 12:38:49 compute-0 sudo[96549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:49 compute-0 podman[96515]: 2025-11-26 12:38:49.807772536 +0000 UTC m=+0.078124674 container init 58c0cc91ee682b6075ea5f7da0065a1e181109b34d508e7413c748eae8166681 (image=quay.io/ceph/ceph:v18, name=ecstatic_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 12:38:49 compute-0 podman[96515]: 2025-11-26 12:38:49.81258415 +0000 UTC m=+0.082936268 container start 58c0cc91ee682b6075ea5f7da0065a1e181109b34d508e7413c748eae8166681 (image=quay.io/ceph/ceph:v18, name=ecstatic_ishizaka, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 12:38:49 compute-0 podman[96515]: 2025-11-26 12:38:49.813855516 +0000 UTC m=+0.084207634 container attach 58c0cc91ee682b6075ea5f7da0065a1e181109b34d508e7413c748eae8166681 (image=quay.io/ceph/ceph:v18, name=ecstatic_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 12:38:49 compute-0 podman[96515]: 2025-11-26 12:38:49.747150449 +0000 UTC m=+0.017502587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:38:49 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v59: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:38:50 compute-0 podman[96611]: 2025-11-26 12:38:50.054662545 +0000 UTC m=+0.034428300 container create 8f215d46650030bb4aedf91517d3592b8d21873b3f9444198b2d466c64ec65bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 12:38:50 compute-0 systemd[1]: Started libpod-conmon-8f215d46650030bb4aedf91517d3592b8d21873b3f9444198b2d466c64ec65bc.scope.
Nov 26 12:38:50 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:50 compute-0 podman[96611]: 2025-11-26 12:38:50.106962989 +0000 UTC m=+0.086728742 container init 8f215d46650030bb4aedf91517d3592b8d21873b3f9444198b2d466c64ec65bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:38:50 compute-0 podman[96611]: 2025-11-26 12:38:50.111516393 +0000 UTC m=+0.091282147 container start 8f215d46650030bb4aedf91517d3592b8d21873b3f9444198b2d466c64ec65bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:38:50 compute-0 podman[96611]: 2025-11-26 12:38:50.11261923 +0000 UTC m=+0.092384985 container attach 8f215d46650030bb4aedf91517d3592b8d21873b3f9444198b2d466c64ec65bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_tu, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:38:50 compute-0 musing_tu[96641]: 167 167
Nov 26 12:38:50 compute-0 systemd[1]: libpod-8f215d46650030bb4aedf91517d3592b8d21873b3f9444198b2d466c64ec65bc.scope: Deactivated successfully.
Nov 26 12:38:50 compute-0 podman[96611]: 2025-11-26 12:38:50.115535899 +0000 UTC m=+0.095301654 container died 8f215d46650030bb4aedf91517d3592b8d21873b3f9444198b2d466c64ec65bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_tu, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:38:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-e04ebd3e4396ff076163d9e232f433233387c310d8b80806be195ccadf5a1953-merged.mount: Deactivated successfully.
Nov 26 12:38:50 compute-0 podman[96611]: 2025-11-26 12:38:50.036279555 +0000 UTC m=+0.016045319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:50 compute-0 podman[96611]: 2025-11-26 12:38:50.136625311 +0000 UTC m=+0.116391064 container remove 8f215d46650030bb4aedf91517d3592b8d21873b3f9444198b2d466c64ec65bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_tu, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:38:50 compute-0 systemd[1]: libpod-conmon-8f215d46650030bb4aedf91517d3592b8d21873b3f9444198b2d466c64ec65bc.scope: Deactivated successfully.
Nov 26 12:38:50 compute-0 podman[96664]: 2025-11-26 12:38:50.246912636 +0000 UTC m=+0.026965871 container create e396a0dbaee94ed5eb484a8b55bfc8e6ee609509b12cef13dd1feaca31f7f5c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:38:50 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:38:50 compute-0 ceph-mgr[75236]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 26 12:38:50 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 26 12:38:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 26 12:38:50 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:50 compute-0 ecstatic_ishizaka[96564]: Scheduled mds.cephfs update...
Nov 26 12:38:50 compute-0 systemd[1]: Started libpod-conmon-e396a0dbaee94ed5eb484a8b55bfc8e6ee609509b12cef13dd1feaca31f7f5c3.scope.
Nov 26 12:38:50 compute-0 systemd[1]: libpod-58c0cc91ee682b6075ea5f7da0065a1e181109b34d508e7413c748eae8166681.scope: Deactivated successfully.
Nov 26 12:38:50 compute-0 podman[96515]: 2025-11-26 12:38:50.282709564 +0000 UTC m=+0.553061682 container died 58c0cc91ee682b6075ea5f7da0065a1e181109b34d508e7413c748eae8166681 (image=quay.io/ceph/ceph:v18, name=ecstatic_ishizaka, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 12:38:50 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/575c8b53fac472ddaccf92b86cfb26afd89d4f4bdfcb39ea0cefe533ac2fcd23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/575c8b53fac472ddaccf92b86cfb26afd89d4f4bdfcb39ea0cefe533ac2fcd23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/575c8b53fac472ddaccf92b86cfb26afd89d4f4bdfcb39ea0cefe533ac2fcd23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/575c8b53fac472ddaccf92b86cfb26afd89d4f4bdfcb39ea0cefe533ac2fcd23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:50 compute-0 podman[96664]: 2025-11-26 12:38:50.308107648 +0000 UTC m=+0.088160883 container init e396a0dbaee94ed5eb484a8b55bfc8e6ee609509b12cef13dd1feaca31f7f5c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mclean, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 12:38:50 compute-0 podman[96664]: 2025-11-26 12:38:50.313983364 +0000 UTC m=+0.094036600 container start e396a0dbaee94ed5eb484a8b55bfc8e6ee609509b12cef13dd1feaca31f7f5c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mclean, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 12:38:50 compute-0 podman[96664]: 2025-11-26 12:38:50.315580285 +0000 UTC m=+0.095633521 container attach e396a0dbaee94ed5eb484a8b55bfc8e6ee609509b12cef13dd1feaca31f7f5c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:38:50 compute-0 podman[96515]: 2025-11-26 12:38:50.319663501 +0000 UTC m=+0.590015619 container remove 58c0cc91ee682b6075ea5f7da0065a1e181109b34d508e7413c748eae8166681 (image=quay.io/ceph/ceph:v18, name=ecstatic_ishizaka, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 12:38:50 compute-0 systemd[1]: libpod-conmon-58c0cc91ee682b6075ea5f7da0065a1e181109b34d508e7413c748eae8166681.scope: Deactivated successfully.
Nov 26 12:38:50 compute-0 podman[96664]: 2025-11-26 12:38:50.235521564 +0000 UTC m=+0.015574799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:50 compute-0 sudo[96452]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:50 compute-0 ceph-mon[74966]: 3.b scrub starts
Nov 26 12:38:50 compute-0 ceph-mon[74966]: 3.b scrub ok
Nov 26 12:38:50 compute-0 ceph-mon[74966]: from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:38:50 compute-0 ceph-mon[74966]: Saving service mds.cephfs spec with placement compute-0
Nov 26 12:38:50 compute-0 ceph-mon[74966]: pgmap v59: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:38:50 compute-0 ceph-mon[74966]: from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:38:50 compute-0 ceph-mon[74966]: Saving service mds.cephfs spec with placement compute-0
Nov 26 12:38:50 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-7209af3e1e406b702564d5f908c68808509ea221c196a0871ce8bb2db6cf4788-merged.mount: Deactivated successfully.
Nov 26 12:38:50 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.e scrub starts
Nov 26 12:38:50 compute-0 sudo[96771]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxcbyjqxsbwmtpbmsuxpiycpsuprrwjs ; /usr/bin/python3'
Nov 26 12:38:50 compute-0 sudo[96771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:50 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.e scrub ok
Nov 26 12:38:50 compute-0 python3[96773]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 12:38:50 compute-0 sudo[96771]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:38:50 compute-0 sudo[96855]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqrccjquuckiwaiiyewdpfmrjvjnfowb ; /usr/bin/python3'
Nov 26 12:38:50 compute-0 sudo[96855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:51 compute-0 agitated_mclean[96680]: {
Nov 26 12:38:51 compute-0 agitated_mclean[96680]:     "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 12:38:51 compute-0 agitated_mclean[96680]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:38:51 compute-0 agitated_mclean[96680]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 12:38:51 compute-0 agitated_mclean[96680]:         "osd_id": 1,
Nov 26 12:38:51 compute-0 agitated_mclean[96680]:         "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:38:51 compute-0 agitated_mclean[96680]:         "type": "bluestore"
Nov 26 12:38:51 compute-0 agitated_mclean[96680]:     },
Nov 26 12:38:51 compute-0 agitated_mclean[96680]:     "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 12:38:51 compute-0 agitated_mclean[96680]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:38:51 compute-0 agitated_mclean[96680]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 12:38:51 compute-0 agitated_mclean[96680]:         "osd_id": 2,
Nov 26 12:38:51 compute-0 agitated_mclean[96680]:         "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:38:51 compute-0 agitated_mclean[96680]:         "type": "bluestore"
Nov 26 12:38:51 compute-0 agitated_mclean[96680]:     },
Nov 26 12:38:51 compute-0 agitated_mclean[96680]:     "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 12:38:51 compute-0 agitated_mclean[96680]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:38:51 compute-0 agitated_mclean[96680]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 12:38:51 compute-0 agitated_mclean[96680]:         "osd_id": 0,
Nov 26 12:38:51 compute-0 agitated_mclean[96680]:         "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:38:51 compute-0 agitated_mclean[96680]:         "type": "bluestore"
Nov 26 12:38:51 compute-0 agitated_mclean[96680]:     }
Nov 26 12:38:51 compute-0 agitated_mclean[96680]: }
Nov 26 12:38:51 compute-0 python3[96857]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764160730.610688-37129-213518900630394/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=c49cad1c73fc246f2066e2f44ed85f4bdde7800e backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:38:51 compute-0 sudo[96855]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:51 compute-0 systemd[1]: libpod-e396a0dbaee94ed5eb484a8b55bfc8e6ee609509b12cef13dd1feaca31f7f5c3.scope: Deactivated successfully.
Nov 26 12:38:51 compute-0 podman[96664]: 2025-11-26 12:38:51.101689085 +0000 UTC m=+0.881742320 container died e396a0dbaee94ed5eb484a8b55bfc8e6ee609509b12cef13dd1feaca31f7f5c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:38:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-575c8b53fac472ddaccf92b86cfb26afd89d4f4bdfcb39ea0cefe533ac2fcd23-merged.mount: Deactivated successfully.
Nov 26 12:38:51 compute-0 podman[96664]: 2025-11-26 12:38:51.134711234 +0000 UTC m=+0.914764469 container remove e396a0dbaee94ed5eb484a8b55bfc8e6ee609509b12cef13dd1feaca31f7f5c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:38:51 compute-0 systemd[1]: libpod-conmon-e396a0dbaee94ed5eb484a8b55bfc8e6ee609509b12cef13dd1feaca31f7f5c3.scope: Deactivated successfully.
Nov 26 12:38:51 compute-0 sudo[96549]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:51 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:38:51 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:51 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:38:51 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:51 compute-0 sudo[96908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:51 compute-0 sudo[96908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:51 compute-0 sudo[96908]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:51 compute-0 sudo[96933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 12:38:51 compute-0 sudo[96933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:51 compute-0 sudo[96933]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:51 compute-0 sudo[96958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:51 compute-0 sudo[96958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:51 compute-0 sudo[96958]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:51 compute-0 sudo[96983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:38:51 compute-0 sudo[96983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:51 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.d deep-scrub starts
Nov 26 12:38:51 compute-0 sudo[96983]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:51 compute-0 sudo[97030]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tclyaofvkazlxwsndyvrjwpcynvdefiy ; /usr/bin/python3'
Nov 26 12:38:51 compute-0 sudo[97030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:51 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.d deep-scrub ok
Nov 26 12:38:51 compute-0 sudo[97032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:51 compute-0 sudo[97032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:51 compute-0 sudo[97032]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:51 compute-0 sudo[97059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 26 12:38:51 compute-0 sudo[97059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:51 compute-0 python3[97038]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:38:51 compute-0 podman[97084]: 2025-11-26 12:38:51.499678677 +0000 UTC m=+0.028042508 container create b21c281db5ea120833b555bd977984cfe774050847be626e4ab0acf70a5be0c4 (image=quay.io/ceph/ceph:v18, name=gallant_buck, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:38:51 compute-0 systemd[1]: Started libpod-conmon-b21c281db5ea120833b555bd977984cfe774050847be626e4ab0acf70a5be0c4.scope.
Nov 26 12:38:51 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d7134245cbc77ef0f01f8f2473051d8a8ea48c9c1b4426681111ab22162d3bb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d7134245cbc77ef0f01f8f2473051d8a8ea48c9c1b4426681111ab22162d3bb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:51 compute-0 podman[97084]: 2025-11-26 12:38:51.546642463 +0000 UTC m=+0.075006324 container init b21c281db5ea120833b555bd977984cfe774050847be626e4ab0acf70a5be0c4 (image=quay.io/ceph/ceph:v18, name=gallant_buck, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:38:51 compute-0 podman[97084]: 2025-11-26 12:38:51.551826212 +0000 UTC m=+0.080190052 container start b21c281db5ea120833b555bd977984cfe774050847be626e4ab0acf70a5be0c4 (image=quay.io/ceph/ceph:v18, name=gallant_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 26 12:38:51 compute-0 podman[97084]: 2025-11-26 12:38:51.553166587 +0000 UTC m=+0.081530427 container attach b21c281db5ea120833b555bd977984cfe774050847be626e4ab0acf70a5be0c4 (image=quay.io/ceph/ceph:v18, name=gallant_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 12:38:51 compute-0 podman[97084]: 2025-11-26 12:38:51.489043596 +0000 UTC m=+0.017407466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:38:51 compute-0 ceph-mon[74966]: 2.e scrub starts
Nov 26 12:38:51 compute-0 ceph-mon[74966]: 2.e scrub ok
Nov 26 12:38:51 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:51 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:51 compute-0 podman[97156]: 2025-11-26 12:38:51.751499797 +0000 UTC m=+0.036650624 container exec ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 12:38:51 compute-0 podman[97156]: 2025-11-26 12:38:51.83102046 +0000 UTC m=+0.116171268 container exec_died ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:38:51 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v60: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:38:52 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Nov 26 12:38:52 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2663828596' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 26 12:38:52 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2663828596' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 26 12:38:52 compute-0 systemd[1]: libpod-b21c281db5ea120833b555bd977984cfe774050847be626e4ab0acf70a5be0c4.scope: Deactivated successfully.
Nov 26 12:38:52 compute-0 podman[97084]: 2025-11-26 12:38:52.04917903 +0000 UTC m=+0.577542871 container died b21c281db5ea120833b555bd977984cfe774050847be626e4ab0acf70a5be0c4 (image=quay.io/ceph/ceph:v18, name=gallant_buck, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:38:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d7134245cbc77ef0f01f8f2473051d8a8ea48c9c1b4426681111ab22162d3bb-merged.mount: Deactivated successfully.
Nov 26 12:38:52 compute-0 podman[97084]: 2025-11-26 12:38:52.075187017 +0000 UTC m=+0.603550857 container remove b21c281db5ea120833b555bd977984cfe774050847be626e4ab0acf70a5be0c4 (image=quay.io/ceph/ceph:v18, name=gallant_buck, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:38:52 compute-0 systemd[1]: libpod-conmon-b21c281db5ea120833b555bd977984cfe774050847be626e4ab0acf70a5be0c4.scope: Deactivated successfully.
Nov 26 12:38:52 compute-0 sudo[97030]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:52 compute-0 sudo[97059]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:52 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:38:52 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:52 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:38:52 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:52 compute-0 sudo[97287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:52 compute-0 sudo[97287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:52 compute-0 sudo[97287]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:52 compute-0 sudo[97312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:38:52 compute-0 sudo[97312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:52 compute-0 sudo[97312]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:52 compute-0 sudo[97337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:52 compute-0 sudo[97337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:52 compute-0 sudo[97337]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:52 compute-0 sudo[97362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 12:38:52 compute-0 sudo[97362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:52 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Nov 26 12:38:52 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Nov 26 12:38:52 compute-0 sudo[97422]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmybsqaomhlbbmmwntmbgpteqjnaaeeg ; /usr/bin/python3'
Nov 26 12:38:52 compute-0 sudo[97422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:52 compute-0 python3[97424]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:38:52 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.c scrub starts
Nov 26 12:38:52 compute-0 ceph-mon[74966]: 3.d deep-scrub starts
Nov 26 12:38:52 compute-0 ceph-mon[74966]: 3.d deep-scrub ok
Nov 26 12:38:52 compute-0 ceph-mon[74966]: pgmap v60: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:38:52 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2663828596' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 26 12:38:52 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2663828596' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 26 12:38:52 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:52 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:52 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.c scrub ok
Nov 26 12:38:52 compute-0 sudo[97362]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:52 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:38:52 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:38:52 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 12:38:52 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:38:52 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 12:38:52 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:52 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev da6f4f28-5072-4ae3-9595-ff4e2e68a273 does not exist
Nov 26 12:38:52 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 52a082a4-7168-4e4e-830b-3168386aba5c does not exist
Nov 26 12:38:52 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 23f2d4f4-ce91-4c7f-9410-3c88dd7e81c8 does not exist
Nov 26 12:38:52 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 12:38:52 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:38:52 compute-0 podman[97437]: 2025-11-26 12:38:52.665366714 +0000 UTC m=+0.056658750 container create 40fb31c1c30946de36f01f0fd4e411697ec442bba608bddc8c2407ea6580ca3f (image=quay.io/ceph/ceph:v18, name=elated_mclaren, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:38:52 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 12:38:52 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:38:52 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:38:52 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:38:52 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Nov 26 12:38:52 compute-0 systemd[1]: Started libpod-conmon-40fb31c1c30946de36f01f0fd4e411697ec442bba608bddc8c2407ea6580ca3f.scope.
Nov 26 12:38:52 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Nov 26 12:38:52 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c2dce4003744c699fad0e6d060ac3132b8237ef0ba4810b21fdb28d20477e5e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c2dce4003744c699fad0e6d060ac3132b8237ef0ba4810b21fdb28d20477e5e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:52 compute-0 podman[97437]: 2025-11-26 12:38:52.714609664 +0000 UTC m=+0.105901709 container init 40fb31c1c30946de36f01f0fd4e411697ec442bba608bddc8c2407ea6580ca3f (image=quay.io/ceph/ceph:v18, name=elated_mclaren, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 26 12:38:52 compute-0 sudo[97455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:52 compute-0 sudo[97455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:52 compute-0 podman[97437]: 2025-11-26 12:38:52.720593485 +0000 UTC m=+0.111885500 container start 40fb31c1c30946de36f01f0fd4e411697ec442bba608bddc8c2407ea6580ca3f (image=quay.io/ceph/ceph:v18, name=elated_mclaren, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 12:38:52 compute-0 sudo[97455]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:52 compute-0 podman[97437]: 2025-11-26 12:38:52.724925031 +0000 UTC m=+0.116217066 container attach 40fb31c1c30946de36f01f0fd4e411697ec442bba608bddc8c2407ea6580ca3f (image=quay.io/ceph/ceph:v18, name=elated_mclaren, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:38:52 compute-0 podman[97437]: 2025-11-26 12:38:52.651117105 +0000 UTC m=+0.042409150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:38:52 compute-0 sudo[97486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:38:52 compute-0 sudo[97486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:52 compute-0 sudo[97486]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:52 compute-0 sudo[97511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:52 compute-0 sudo[97511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:52 compute-0 sudo[97511]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:52 compute-0 sudo[97536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 12:38:52 compute-0 sudo[97536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:53 compute-0 podman[97612]: 2025-11-26 12:38:53.077323724 +0000 UTC m=+0.024682041 container create 9812cd3e44937140d798655a41dddb20181fb4a02a9ffd39df589088d7d8628a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lumiere, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Nov 26 12:38:53 compute-0 systemd[1]: Started libpod-conmon-9812cd3e44937140d798655a41dddb20181fb4a02a9ffd39df589088d7d8628a.scope.
Nov 26 12:38:53 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:53 compute-0 podman[97612]: 2025-11-26 12:38:53.128678558 +0000 UTC m=+0.076036895 container init 9812cd3e44937140d798655a41dddb20181fb4a02a9ffd39df589088d7d8628a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lumiere, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 12:38:53 compute-0 podman[97612]: 2025-11-26 12:38:53.133470404 +0000 UTC m=+0.080828721 container start 9812cd3e44937140d798655a41dddb20181fb4a02a9ffd39df589088d7d8628a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:38:53 compute-0 podman[97612]: 2025-11-26 12:38:53.134659062 +0000 UTC m=+0.082017380 container attach 9812cd3e44937140d798655a41dddb20181fb4a02a9ffd39df589088d7d8628a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 12:38:53 compute-0 funny_lumiere[97625]: 167 167
Nov 26 12:38:53 compute-0 systemd[1]: libpod-9812cd3e44937140d798655a41dddb20181fb4a02a9ffd39df589088d7d8628a.scope: Deactivated successfully.
Nov 26 12:38:53 compute-0 conmon[97625]: conmon 9812cd3e44937140d798 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9812cd3e44937140d798655a41dddb20181fb4a02a9ffd39df589088d7d8628a.scope/container/memory.events
Nov 26 12:38:53 compute-0 podman[97612]: 2025-11-26 12:38:53.137578106 +0000 UTC m=+0.084936423 container died 9812cd3e44937140d798655a41dddb20181fb4a02a9ffd39df589088d7d8628a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:38:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-2eaa0336e6fefa5578f3341062557e56b74e3def2d7a3db977af470202f307df-merged.mount: Deactivated successfully.
Nov 26 12:38:53 compute-0 podman[97612]: 2025-11-26 12:38:53.161140997 +0000 UTC m=+0.108499314 container remove 9812cd3e44937140d798655a41dddb20181fb4a02a9ffd39df589088d7d8628a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:38:53 compute-0 podman[97612]: 2025-11-26 12:38:53.067032492 +0000 UTC m=+0.014390829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:53 compute-0 systemd[1]: libpod-conmon-9812cd3e44937140d798655a41dddb20181fb4a02a9ffd39df589088d7d8628a.scope: Deactivated successfully.
Nov 26 12:38:53 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 26 12:38:53 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2011144616' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 26 12:38:53 compute-0 elated_mclaren[97470]: 
Nov 26 12:38:53 compute-0 elated_mclaren[97470]: {"fsid":"f7d7fe93-41e5-51c4-b72d-63b38686102e","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":117,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":31,"num_osds":3,"num_up_osds":3,"osd_up_since":1764160707,"num_in_osds":3,"osd_in_since":1764160688,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":131}],"num_pgs":131,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83795968,"bytes_avail":64328130560,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-26T12:38:37.846513+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Nov 26 12:38:53 compute-0 systemd[1]: libpod-40fb31c1c30946de36f01f0fd4e411697ec442bba608bddc8c2407ea6580ca3f.scope: Deactivated successfully.
Nov 26 12:38:53 compute-0 podman[97437]: 2025-11-26 12:38:53.225133292 +0000 UTC m=+0.616425316 container died 40fb31c1c30946de36f01f0fd4e411697ec442bba608bddc8c2407ea6580ca3f (image=quay.io/ceph/ceph:v18, name=elated_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 12:38:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c2dce4003744c699fad0e6d060ac3132b8237ef0ba4810b21fdb28d20477e5e-merged.mount: Deactivated successfully.
Nov 26 12:38:53 compute-0 podman[97437]: 2025-11-26 12:38:53.249917604 +0000 UTC m=+0.641209629 container remove 40fb31c1c30946de36f01f0fd4e411697ec442bba608bddc8c2407ea6580ca3f (image=quay.io/ceph/ceph:v18, name=elated_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 12:38:53 compute-0 systemd[1]: libpod-conmon-40fb31c1c30946de36f01f0fd4e411697ec442bba608bddc8c2407ea6580ca3f.scope: Deactivated successfully.
Nov 26 12:38:53 compute-0 sudo[97422]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:53 compute-0 podman[97657]: 2025-11-26 12:38:53.294982778 +0000 UTC m=+0.036154025 container create 19f505eb41eae3a1d44cb510fdca3eb41942c4ad668ad17d6c3675b5748ad292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kepler, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:38:53 compute-0 systemd[1]: Started libpod-conmon-19f505eb41eae3a1d44cb510fdca3eb41942c4ad668ad17d6c3675b5748ad292.scope.
Nov 26 12:38:53 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caebc6b50543dacdba568243871181f2a00d3d14816f71705b1bb62376f18c23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caebc6b50543dacdba568243871181f2a00d3d14816f71705b1bb62376f18c23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caebc6b50543dacdba568243871181f2a00d3d14816f71705b1bb62376f18c23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caebc6b50543dacdba568243871181f2a00d3d14816f71705b1bb62376f18c23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caebc6b50543dacdba568243871181f2a00d3d14816f71705b1bb62376f18c23/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:53 compute-0 podman[97657]: 2025-11-26 12:38:53.350995876 +0000 UTC m=+0.092167133 container init 19f505eb41eae3a1d44cb510fdca3eb41942c4ad668ad17d6c3675b5748ad292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kepler, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 12:38:53 compute-0 podman[97657]: 2025-11-26 12:38:53.35731451 +0000 UTC m=+0.098485758 container start 19f505eb41eae3a1d44cb510fdca3eb41942c4ad668ad17d6c3675b5748ad292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 12:38:53 compute-0 podman[97657]: 2025-11-26 12:38:53.358708127 +0000 UTC m=+0.099879384 container attach 19f505eb41eae3a1d44cb510fdca3eb41942c4ad668ad17d6c3675b5748ad292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kepler, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 12:38:53 compute-0 sudo[97700]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnjetzrhlnhusqweaqxyisysddixozew ; /usr/bin/python3'
Nov 26 12:38:53 compute-0 sudo[97700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:53 compute-0 podman[97657]: 2025-11-26 12:38:53.284431174 +0000 UTC m=+0.025602431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:53 compute-0 python3[97702]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:38:53 compute-0 podman[97703]: 2025-11-26 12:38:53.532876266 +0000 UTC m=+0.030204349 container create d9a3effa084fd0329d280de3157ad33e42fe356e4e515d16b9ff4ea4aba3cfb0 (image=quay.io/ceph/ceph:v18, name=compassionate_noyce, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 12:38:53 compute-0 systemd[1]: Started libpod-conmon-d9a3effa084fd0329d280de3157ad33e42fe356e4e515d16b9ff4ea4aba3cfb0.scope.
Nov 26 12:38:53 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad06ef884eddb201d2143476b3c208c884d8768191ac51907122995f3fa2865a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad06ef884eddb201d2143476b3c208c884d8768191ac51907122995f3fa2865a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:53 compute-0 podman[97703]: 2025-11-26 12:38:53.585120101 +0000 UTC m=+0.082448195 container init d9a3effa084fd0329d280de3157ad33e42fe356e4e515d16b9ff4ea4aba3cfb0 (image=quay.io/ceph/ceph:v18, name=compassionate_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 12:38:53 compute-0 podman[97703]: 2025-11-26 12:38:53.589690569 +0000 UTC m=+0.087018652 container start d9a3effa084fd0329d280de3157ad33e42fe356e4e515d16b9ff4ea4aba3cfb0 (image=quay.io/ceph/ceph:v18, name=compassionate_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:38:53 compute-0 podman[97703]: 2025-11-26 12:38:53.591824456 +0000 UTC m=+0.089152559 container attach d9a3effa084fd0329d280de3157ad33e42fe356e4e515d16b9ff4ea4aba3cfb0 (image=quay.io/ceph/ceph:v18, name=compassionate_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 12:38:53 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Nov 26 12:38:53 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Nov 26 12:38:53 compute-0 podman[97703]: 2025-11-26 12:38:53.520999805 +0000 UTC m=+0.018327908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:38:53 compute-0 ceph-mon[74966]: 3.10 scrub starts
Nov 26 12:38:53 compute-0 ceph-mon[74966]: 3.10 scrub ok
Nov 26 12:38:53 compute-0 ceph-mon[74966]: 4.c scrub starts
Nov 26 12:38:53 compute-0 ceph-mon[74966]: 4.c scrub ok
Nov 26 12:38:53 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:38:53 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:38:53 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:53 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:38:53 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:38:53 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:38:53 compute-0 ceph-mon[74966]: 2.10 scrub starts
Nov 26 12:38:53 compute-0 ceph-mon[74966]: 2.10 scrub ok
Nov 26 12:38:53 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2011144616' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 26 12:38:53 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Nov 26 12:38:53 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Nov 26 12:38:53 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v61: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:38:54 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 12:38:54 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3209780401' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 12:38:54 compute-0 compassionate_noyce[97715]: 
Nov 26 12:38:54 compute-0 compassionate_noyce[97715]: {"epoch":1,"fsid":"f7d7fe93-41e5-51c4-b72d-63b38686102e","modified":"2025-11-26T12:36:52.476654Z","created":"2025-11-26T12:36:52.476654Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Nov 26 12:38:54 compute-0 compassionate_noyce[97715]: dumped monmap epoch 1
Nov 26 12:38:54 compute-0 systemd[1]: libpod-d9a3effa084fd0329d280de3157ad33e42fe356e4e515d16b9ff4ea4aba3cfb0.scope: Deactivated successfully.
Nov 26 12:38:54 compute-0 podman[97703]: 2025-11-26 12:38:54.115614835 +0000 UTC m=+0.612942919 container died d9a3effa084fd0329d280de3157ad33e42fe356e4e515d16b9ff4ea4aba3cfb0 (image=quay.io/ceph/ceph:v18, name=compassionate_noyce, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 12:38:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad06ef884eddb201d2143476b3c208c884d8768191ac51907122995f3fa2865a-merged.mount: Deactivated successfully.
Nov 26 12:38:54 compute-0 podman[97703]: 2025-11-26 12:38:54.140354784 +0000 UTC m=+0.637682866 container remove d9a3effa084fd0329d280de3157ad33e42fe356e4e515d16b9ff4ea4aba3cfb0 (image=quay.io/ceph/ceph:v18, name=compassionate_noyce, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Nov 26 12:38:54 compute-0 sudo[97700]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:54 compute-0 systemd[1]: libpod-conmon-d9a3effa084fd0329d280de3157ad33e42fe356e4e515d16b9ff4ea4aba3cfb0.scope: Deactivated successfully.
Nov 26 12:38:54 compute-0 hopeful_kepler[97672]: --> passed data devices: 0 physical, 3 LVM
Nov 26 12:38:54 compute-0 hopeful_kepler[97672]: --> relative data size: 1.0
Nov 26 12:38:54 compute-0 hopeful_kepler[97672]: --> All data devices are unavailable
Nov 26 12:38:54 compute-0 systemd[1]: libpod-19f505eb41eae3a1d44cb510fdca3eb41942c4ad668ad17d6c3675b5748ad292.scope: Deactivated successfully.
Nov 26 12:38:54 compute-0 podman[97657]: 2025-11-26 12:38:54.186035102 +0000 UTC m=+0.927206349 container died 19f505eb41eae3a1d44cb510fdca3eb41942c4ad668ad17d6c3675b5748ad292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kepler, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 12:38:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-caebc6b50543dacdba568243871181f2a00d3d14816f71705b1bb62376f18c23-merged.mount: Deactivated successfully.
Nov 26 12:38:54 compute-0 podman[97657]: 2025-11-26 12:38:54.216109684 +0000 UTC m=+0.957280931 container remove 19f505eb41eae3a1d44cb510fdca3eb41942c4ad668ad17d6c3675b5748ad292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kepler, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:38:54 compute-0 systemd[1]: libpod-conmon-19f505eb41eae3a1d44cb510fdca3eb41942c4ad668ad17d6c3675b5748ad292.scope: Deactivated successfully.
Nov 26 12:38:54 compute-0 sudo[97536]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:54 compute-0 sudo[97784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:54 compute-0 sudo[97784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:54 compute-0 sudo[97784]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:54 compute-0 sudo[97809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:38:54 compute-0 sudo[97809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:54 compute-0 sudo[97809]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:54 compute-0 sudo[97834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:54 compute-0 sudo[97834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:54 compute-0 sudo[97834]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:54 compute-0 sudo[97859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- lvm list --format json
Nov 26 12:38:54 compute-0 sudo[97859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:54 compute-0 sudo[97907]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzxxaydkvcsjmyqtqsclwsgdglakwaxw ; /usr/bin/python3'
Nov 26 12:38:54 compute-0 sudo[97907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:54 compute-0 python3[97909]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:38:54 compute-0 podman[97930]: 2025-11-26 12:38:54.604391679 +0000 UTC m=+0.034693502 container create 3abd7fbcfef9545abb22e15b9407617d19015c47db8253f118c7d81514f5ea9e (image=quay.io/ceph/ceph:v18, name=adoring_fermat, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:38:54 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.16 deep-scrub starts
Nov 26 12:38:54 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.16 deep-scrub ok
Nov 26 12:38:54 compute-0 ceph-mon[74966]: 4.15 scrub starts
Nov 26 12:38:54 compute-0 ceph-mon[74966]: 4.15 scrub ok
Nov 26 12:38:54 compute-0 ceph-mon[74966]: 2.12 scrub starts
Nov 26 12:38:54 compute-0 ceph-mon[74966]: 2.12 scrub ok
Nov 26 12:38:54 compute-0 ceph-mon[74966]: pgmap v61: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:38:54 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/3209780401' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 12:38:54 compute-0 systemd[1]: Started libpod-conmon-3abd7fbcfef9545abb22e15b9407617d19015c47db8253f118c7d81514f5ea9e.scope.
Nov 26 12:38:54 compute-0 podman[97950]: 2025-11-26 12:38:54.657086819 +0000 UTC m=+0.041797532 container create 80acaae640b503a0a152327cbad0cede773f65edee42d947b36c1e1fcf046a55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 12:38:54 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83166193ac9c4a6994358f41d6f07a10818eb635d01d588716f4e0e0e8b277af/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83166193ac9c4a6994358f41d6f07a10818eb635d01d588716f4e0e0e8b277af/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:54 compute-0 podman[97930]: 2025-11-26 12:38:54.67515289 +0000 UTC m=+0.105454732 container init 3abd7fbcfef9545abb22e15b9407617d19015c47db8253f118c7d81514f5ea9e (image=quay.io/ceph/ceph:v18, name=adoring_fermat, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 12:38:54 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Nov 26 12:38:54 compute-0 systemd[1]: Started libpod-conmon-80acaae640b503a0a152327cbad0cede773f65edee42d947b36c1e1fcf046a55.scope.
Nov 26 12:38:54 compute-0 podman[97930]: 2025-11-26 12:38:54.681815465 +0000 UTC m=+0.112117288 container start 3abd7fbcfef9545abb22e15b9407617d19015c47db8253f118c7d81514f5ea9e (image=quay.io/ceph/ceph:v18, name=adoring_fermat, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:38:54 compute-0 podman[97930]: 2025-11-26 12:38:54.586630934 +0000 UTC m=+0.016932758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:38:54 compute-0 podman[97930]: 2025-11-26 12:38:54.686927277 +0000 UTC m=+0.117229100 container attach 3abd7fbcfef9545abb22e15b9407617d19015c47db8253f118c7d81514f5ea9e (image=quay.io/ceph/ceph:v18, name=adoring_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:38:54 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Nov 26 12:38:54 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:54 compute-0 podman[97950]: 2025-11-26 12:38:54.706225229 +0000 UTC m=+0.090935942 container init 80acaae640b503a0a152327cbad0cede773f65edee42d947b36c1e1fcf046a55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_panini, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 12:38:54 compute-0 podman[97950]: 2025-11-26 12:38:54.710785928 +0000 UTC m=+0.095496621 container start 80acaae640b503a0a152327cbad0cede773f65edee42d947b36c1e1fcf046a55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_panini, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:38:54 compute-0 podman[97950]: 2025-11-26 12:38:54.711856674 +0000 UTC m=+0.096567397 container attach 80acaae640b503a0a152327cbad0cede773f65edee42d947b36c1e1fcf046a55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_panini, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 12:38:54 compute-0 naughty_panini[97969]: 167 167
Nov 26 12:38:54 compute-0 systemd[1]: libpod-80acaae640b503a0a152327cbad0cede773f65edee42d947b36c1e1fcf046a55.scope: Deactivated successfully.
Nov 26 12:38:54 compute-0 podman[97950]: 2025-11-26 12:38:54.713896593 +0000 UTC m=+0.098607306 container died 80acaae640b503a0a152327cbad0cede773f65edee42d947b36c1e1fcf046a55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_panini, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:38:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-23315a006364f2b7fa7662a5ff6b068b04e44a01f6d2600eefa498eebd3302bb-merged.mount: Deactivated successfully.
Nov 26 12:38:54 compute-0 podman[97950]: 2025-11-26 12:38:54.643070749 +0000 UTC m=+0.027781472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:54 compute-0 podman[97950]: 2025-11-26 12:38:54.741468189 +0000 UTC m=+0.126178892 container remove 80acaae640b503a0a152327cbad0cede773f65edee42d947b36c1e1fcf046a55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_panini, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Nov 26 12:38:54 compute-0 systemd[1]: libpod-conmon-80acaae640b503a0a152327cbad0cede773f65edee42d947b36c1e1fcf046a55.scope: Deactivated successfully.
Nov 26 12:38:54 compute-0 podman[97990]: 2025-11-26 12:38:54.85134136 +0000 UTC m=+0.026439192 container create 7cbb2fb69fefb087a106f429b3d8c4426a4c907cf317501a500e973bbfc79c0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 12:38:54 compute-0 systemd[1]: Started libpod-conmon-7cbb2fb69fefb087a106f429b3d8c4426a4c907cf317501a500e973bbfc79c0c.scope.
Nov 26 12:38:54 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a4dea14f4c959ec6d644392073fe87c8a0ac53caeb6c494f6d362064ae0b974/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a4dea14f4c959ec6d644392073fe87c8a0ac53caeb6c494f6d362064ae0b974/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a4dea14f4c959ec6d644392073fe87c8a0ac53caeb6c494f6d362064ae0b974/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a4dea14f4c959ec6d644392073fe87c8a0ac53caeb6c494f6d362064ae0b974/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:54 compute-0 podman[97990]: 2025-11-26 12:38:54.913063978 +0000 UTC m=+0.088161809 container init 7cbb2fb69fefb087a106f429b3d8c4426a4c907cf317501a500e973bbfc79c0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 12:38:54 compute-0 podman[97990]: 2025-11-26 12:38:54.918024264 +0000 UTC m=+0.093122096 container start 7cbb2fb69fefb087a106f429b3d8c4426a4c907cf317501a500e973bbfc79c0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:38:54 compute-0 podman[97990]: 2025-11-26 12:38:54.91925764 +0000 UTC m=+0.094355470 container attach 7cbb2fb69fefb087a106f429b3d8c4426a4c907cf317501a500e973bbfc79c0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_matsumoto, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:38:54 compute-0 podman[97990]: 2025-11-26 12:38:54.840650285 +0000 UTC m=+0.015748136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:55 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Nov 26 12:38:55 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1798191645' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 26 12:38:55 compute-0 adoring_fermat[97961]: [client.openstack]
Nov 26 12:38:55 compute-0 adoring_fermat[97961]:         key = AQBP9CZpAAAAABAAMO+aLuzMDoNYc4bplXQ8ZQ==
Nov 26 12:38:55 compute-0 adoring_fermat[97961]:         caps mgr = "allow *"
Nov 26 12:38:55 compute-0 adoring_fermat[97961]:         caps mon = "profile rbd"
Nov 26 12:38:55 compute-0 adoring_fermat[97961]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Nov 26 12:38:55 compute-0 systemd[1]: libpod-3abd7fbcfef9545abb22e15b9407617d19015c47db8253f118c7d81514f5ea9e.scope: Deactivated successfully.
Nov 26 12:38:55 compute-0 podman[97930]: 2025-11-26 12:38:55.196216258 +0000 UTC m=+0.626518081 container died 3abd7fbcfef9545abb22e15b9407617d19015c47db8253f118c7d81514f5ea9e (image=quay.io/ceph/ceph:v18, name=adoring_fermat, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Nov 26 12:38:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-83166193ac9c4a6994358f41d6f07a10818eb635d01d588716f4e0e0e8b277af-merged.mount: Deactivated successfully.
Nov 26 12:38:55 compute-0 podman[97930]: 2025-11-26 12:38:55.220656742 +0000 UTC m=+0.650958565 container remove 3abd7fbcfef9545abb22e15b9407617d19015c47db8253f118c7d81514f5ea9e (image=quay.io/ceph/ceph:v18, name=adoring_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 12:38:55 compute-0 sudo[97907]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:55 compute-0 systemd[1]: libpod-conmon-3abd7fbcfef9545abb22e15b9407617d19015c47db8253f118c7d81514f5ea9e.scope: Deactivated successfully.
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]: {
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:     "0": [
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:         {
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "devices": [
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "/dev/loop3"
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             ],
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "lv_name": "ceph_lv0",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "lv_size": "21470642176",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "name": "ceph_lv0",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "tags": {
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.cluster_name": "ceph",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.crush_device_class": "",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.encrypted": "0",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.osd_id": "0",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.type": "block",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.vdo": "0"
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             },
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "type": "block",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "vg_name": "ceph_vg0"
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:         }
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:     ],
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:     "1": [
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:         {
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "devices": [
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "/dev/loop4"
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             ],
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "lv_name": "ceph_lv1",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "lv_size": "21470642176",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "name": "ceph_lv1",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "tags": {
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.cluster_name": "ceph",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.crush_device_class": "",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.encrypted": "0",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.osd_id": "1",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.type": "block",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.vdo": "0"
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             },
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "type": "block",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "vg_name": "ceph_vg1"
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:         }
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:     ],
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:     "2": [
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:         {
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "devices": [
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "/dev/loop5"
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             ],
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "lv_name": "ceph_lv2",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "lv_size": "21470642176",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "name": "ceph_lv2",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "tags": {
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.cluster_name": "ceph",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.crush_device_class": "",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.encrypted": "0",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.osd_id": "2",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.type": "block",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:                 "ceph.vdo": "0"
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             },
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "type": "block",
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:             "vg_name": "ceph_vg2"
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:         }
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]:     ]
Nov 26 12:38:55 compute-0 quizzical_matsumoto[98003]: }
Nov 26 12:38:55 compute-0 systemd[1]: libpod-7cbb2fb69fefb087a106f429b3d8c4426a4c907cf317501a500e973bbfc79c0c.scope: Deactivated successfully.
Nov 26 12:38:55 compute-0 conmon[98003]: conmon 7cbb2fb69fefb087a106 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7cbb2fb69fefb087a106f429b3d8c4426a4c907cf317501a500e973bbfc79c0c.scope/container/memory.events
Nov 26 12:38:55 compute-0 podman[97990]: 2025-11-26 12:38:55.558511802 +0000 UTC m=+0.733609633 container died 7cbb2fb69fefb087a106f429b3d8c4426a4c907cf317501a500e973bbfc79c0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_matsumoto, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 12:38:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a4dea14f4c959ec6d644392073fe87c8a0ac53caeb6c494f6d362064ae0b974-merged.mount: Deactivated successfully.
Nov 26 12:38:55 compute-0 podman[97990]: 2025-11-26 12:38:55.590818288 +0000 UTC m=+0.765916119 container remove 7cbb2fb69fefb087a106f429b3d8c4426a4c907cf317501a500e973bbfc79c0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_matsumoto, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 12:38:55 compute-0 systemd[1]: libpod-conmon-7cbb2fb69fefb087a106f429b3d8c4426a4c907cf317501a500e973bbfc79c0c.scope: Deactivated successfully.
Nov 26 12:38:55 compute-0 sudo[97859]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:55 compute-0 ceph-mon[74966]: 4.16 deep-scrub starts
Nov 26 12:38:55 compute-0 ceph-mon[74966]: 4.16 deep-scrub ok
Nov 26 12:38:55 compute-0 ceph-mon[74966]: 2.14 scrub starts
Nov 26 12:38:55 compute-0 ceph-mon[74966]: 2.14 scrub ok
Nov 26 12:38:55 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1798191645' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 26 12:38:55 compute-0 sudo[98053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:55 compute-0 sudo[98053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:55 compute-0 sudo[98053]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:55 compute-0 sudo[98078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:38:55 compute-0 sudo[98078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:55 compute-0 sudo[98078]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:55 compute-0 sudo[98103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:55 compute-0 sudo[98103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:55 compute-0 sudo[98103]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:55 compute-0 sudo[98128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- raw list --format json
Nov 26 12:38:55 compute-0 sudo[98128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:55 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v62: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:38:55 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:38:56 compute-0 podman[98182]: 2025-11-26 12:38:56.014197826 +0000 UTC m=+0.027293512 container create 3dcef195c0ce7105061e4d85368efc2937241d8c9932b321c4fe528991205108 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:38:56 compute-0 systemd[1]: Started libpod-conmon-3dcef195c0ce7105061e4d85368efc2937241d8c9932b321c4fe528991205108.scope.
Nov 26 12:38:56 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:56 compute-0 podman[98182]: 2025-11-26 12:38:56.069018819 +0000 UTC m=+0.082114514 container init 3dcef195c0ce7105061e4d85368efc2937241d8c9932b321c4fe528991205108 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 12:38:56 compute-0 podman[98182]: 2025-11-26 12:38:56.073911989 +0000 UTC m=+0.087007665 container start 3dcef195c0ce7105061e4d85368efc2937241d8c9932b321c4fe528991205108 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 12:38:56 compute-0 podman[98182]: 2025-11-26 12:38:56.075071954 +0000 UTC m=+0.088167631 container attach 3dcef195c0ce7105061e4d85368efc2937241d8c9932b321c4fe528991205108 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 26 12:38:56 compute-0 nostalgic_franklin[98214]: 167 167
Nov 26 12:38:56 compute-0 systemd[1]: libpod-3dcef195c0ce7105061e4d85368efc2937241d8c9932b321c4fe528991205108.scope: Deactivated successfully.
Nov 26 12:38:56 compute-0 podman[98182]: 2025-11-26 12:38:56.077317947 +0000 UTC m=+0.090413623 container died 3dcef195c0ce7105061e4d85368efc2937241d8c9932b321c4fe528991205108 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:38:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-de888c48f736ee103f8bdd39f5bc2ee61353bf0d1f59b9782fda337fda75866b-merged.mount: Deactivated successfully.
Nov 26 12:38:56 compute-0 podman[98182]: 2025-11-26 12:38:56.097372056 +0000 UTC m=+0.110467732 container remove 3dcef195c0ce7105061e4d85368efc2937241d8c9932b321c4fe528991205108 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:38:56 compute-0 podman[98182]: 2025-11-26 12:38:56.003162962 +0000 UTC m=+0.016258658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:56 compute-0 systemd[1]: libpod-conmon-3dcef195c0ce7105061e4d85368efc2937241d8c9932b321c4fe528991205108.scope: Deactivated successfully.
Nov 26 12:38:56 compute-0 podman[98299]: 2025-11-26 12:38:56.213480155 +0000 UTC m=+0.030896961 container create 28ca2878c5fa30557cd44cf76a5694213e18361bd6a9cc8b896a3ec9244f771e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chaum, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 12:38:56 compute-0 systemd[1]: Started libpod-conmon-28ca2878c5fa30557cd44cf76a5694213e18361bd6a9cc8b896a3ec9244f771e.scope.
Nov 26 12:38:56 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.13 deep-scrub starts
Nov 26 12:38:56 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e46bcd953ba28fdf7e821835170edb4d767f9f1ca28c9369a9133519abc1e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e46bcd953ba28fdf7e821835170edb4d767f9f1ca28c9369a9133519abc1e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e46bcd953ba28fdf7e821835170edb4d767f9f1ca28c9369a9133519abc1e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e46bcd953ba28fdf7e821835170edb4d767f9f1ca28c9369a9133519abc1e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:56 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.13 deep-scrub ok
Nov 26 12:38:56 compute-0 podman[98299]: 2025-11-26 12:38:56.265264094 +0000 UTC m=+0.082680910 container init 28ca2878c5fa30557cd44cf76a5694213e18361bd6a9cc8b896a3ec9244f771e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 12:38:56 compute-0 podman[98299]: 2025-11-26 12:38:56.271915298 +0000 UTC m=+0.089332094 container start 28ca2878c5fa30557cd44cf76a5694213e18361bd6a9cc8b896a3ec9244f771e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:38:56 compute-0 podman[98299]: 2025-11-26 12:38:56.273002577 +0000 UTC m=+0.090419362 container attach 28ca2878c5fa30557cd44cf76a5694213e18361bd6a9cc8b896a3ec9244f771e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chaum, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 12:38:56 compute-0 podman[98299]: 2025-11-26 12:38:56.201267773 +0000 UTC m=+0.018684589 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:56 compute-0 sudo[98383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvakqaowuxblymnuhpvsacuvecvphutu ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764160736.0507755-37201-152950182947434/async_wrapper.py j562982286282 30 /home/zuul/.ansible/tmp/ansible-tmp-1764160736.0507755-37201-152950182947434/AnsiballZ_command.py _'
Nov 26 12:38:56 compute-0 sudo[98383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:56 compute-0 ansible-async_wrapper.py[98385]: Invoked with j562982286282 30 /home/zuul/.ansible/tmp/ansible-tmp-1764160736.0507755-37201-152950182947434/AnsiballZ_command.py _
Nov 26 12:38:56 compute-0 ansible-async_wrapper.py[98388]: Starting module and watcher
Nov 26 12:38:56 compute-0 ansible-async_wrapper.py[98388]: Start watching 98389 (30)
Nov 26 12:38:56 compute-0 ansible-async_wrapper.py[98389]: Start module (98389)
Nov 26 12:38:56 compute-0 ansible-async_wrapper.py[98385]: Return async_wrapper task started.
Nov 26 12:38:56 compute-0 sudo[98383]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:56 compute-0 python3[98390]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:38:56 compute-0 podman[98391]: 2025-11-26 12:38:56.595533259 +0000 UTC m=+0.029962490 container create 7e9646a7c362e433a745c87ff9e89692eeb4b078851a7f94100f47031243451a (image=quay.io/ceph/ceph:v18, name=quirky_mendeleev, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 12:38:56 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Nov 26 12:38:56 compute-0 systemd[1]: Started libpod-conmon-7e9646a7c362e433a745c87ff9e89692eeb4b078851a7f94100f47031243451a.scope.
Nov 26 12:38:56 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Nov 26 12:38:56 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cdaa26a384c24a12f358da6a6db0bf1cdd284f5f125f70124da3bad87fa3b29/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:56 compute-0 ceph-mon[74966]: pgmap v62: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:38:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cdaa26a384c24a12f358da6a6db0bf1cdd284f5f125f70124da3bad87fa3b29/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:56 compute-0 podman[98391]: 2025-11-26 12:38:56.643285862 +0000 UTC m=+0.077715094 container init 7e9646a7c362e433a745c87ff9e89692eeb4b078851a7f94100f47031243451a (image=quay.io/ceph/ceph:v18, name=quirky_mendeleev, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:38:56 compute-0 podman[98391]: 2025-11-26 12:38:56.647481919 +0000 UTC m=+0.081911151 container start 7e9646a7c362e433a745c87ff9e89692eeb4b078851a7f94100f47031243451a (image=quay.io/ceph/ceph:v18, name=quirky_mendeleev, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:38:56 compute-0 podman[98391]: 2025-11-26 12:38:56.648607359 +0000 UTC m=+0.083036591 container attach 7e9646a7c362e433a745c87ff9e89692eeb4b078851a7f94100f47031243451a (image=quay.io/ceph/ceph:v18, name=quirky_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:38:56 compute-0 podman[98391]: 2025-11-26 12:38:56.583357645 +0000 UTC m=+0.017786888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:38:57 compute-0 gifted_chaum[98353]: {
Nov 26 12:38:57 compute-0 gifted_chaum[98353]:     "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 12:38:57 compute-0 gifted_chaum[98353]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:38:57 compute-0 gifted_chaum[98353]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 12:38:57 compute-0 gifted_chaum[98353]:         "osd_id": 1,
Nov 26 12:38:57 compute-0 gifted_chaum[98353]:         "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:38:57 compute-0 gifted_chaum[98353]:         "type": "bluestore"
Nov 26 12:38:57 compute-0 gifted_chaum[98353]:     },
Nov 26 12:38:57 compute-0 gifted_chaum[98353]:     "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 12:38:57 compute-0 gifted_chaum[98353]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:38:57 compute-0 gifted_chaum[98353]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 12:38:57 compute-0 gifted_chaum[98353]:         "osd_id": 2,
Nov 26 12:38:57 compute-0 gifted_chaum[98353]:         "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:38:57 compute-0 gifted_chaum[98353]:         "type": "bluestore"
Nov 26 12:38:57 compute-0 gifted_chaum[98353]:     },
Nov 26 12:38:57 compute-0 gifted_chaum[98353]:     "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 12:38:57 compute-0 gifted_chaum[98353]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:38:57 compute-0 gifted_chaum[98353]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 12:38:57 compute-0 gifted_chaum[98353]:         "osd_id": 0,
Nov 26 12:38:57 compute-0 gifted_chaum[98353]:         "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:38:57 compute-0 gifted_chaum[98353]:         "type": "bluestore"
Nov 26 12:38:57 compute-0 gifted_chaum[98353]:     }
Nov 26 12:38:57 compute-0 gifted_chaum[98353]: }
Nov 26 12:38:57 compute-0 systemd[1]: libpod-28ca2878c5fa30557cd44cf76a5694213e18361bd6a9cc8b896a3ec9244f771e.scope: Deactivated successfully.
Nov 26 12:38:57 compute-0 podman[98299]: 2025-11-26 12:38:57.044187533 +0000 UTC m=+0.861604330 container died 28ca2878c5fa30557cd44cf76a5694213e18361bd6a9cc8b896a3ec9244f771e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 12:38:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4e46bcd953ba28fdf7e821835170edb4d767f9f1ca28c9369a9133519abc1e8-merged.mount: Deactivated successfully.
Nov 26 12:38:57 compute-0 podman[98299]: 2025-11-26 12:38:57.07374537 +0000 UTC m=+0.891162167 container remove 28ca2878c5fa30557cd44cf76a5694213e18361bd6a9cc8b896a3ec9244f771e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chaum, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:38:57 compute-0 systemd[1]: libpod-conmon-28ca2878c5fa30557cd44cf76a5694213e18361bd6a9cc8b896a3ec9244f771e.scope: Deactivated successfully.
Nov 26 12:38:57 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 12:38:57 compute-0 quirky_mendeleev[98403]: 
Nov 26 12:38:57 compute-0 quirky_mendeleev[98403]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 26 12:38:57 compute-0 sudo[98128]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:57 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:38:57 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:57 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:38:57 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:57 compute-0 ceph-mgr[75236]: [progress INFO root] update: starting ev 30acd692-57bd-49fd-ae87-1be5cad78c57 (Updating rgw.rgw deployment (+1 -> 1))
Nov 26 12:38:57 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.cpfqrx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 26 12:38:57 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.cpfqrx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 26 12:38:57 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.cpfqrx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 26 12:38:57 compute-0 systemd[1]: libpod-7e9646a7c362e433a745c87ff9e89692eeb4b078851a7f94100f47031243451a.scope: Deactivated successfully.
Nov 26 12:38:57 compute-0 podman[98391]: 2025-11-26 12:38:57.111715691 +0000 UTC m=+0.546144933 container died 7e9646a7c362e433a745c87ff9e89692eeb4b078851a7f94100f47031243451a (image=quay.io/ceph/ceph:v18, name=quirky_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:38:57 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 26 12:38:57 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:57 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:38:57 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:38:57 compute-0 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.cpfqrx on compute-0
Nov 26 12:38:57 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.cpfqrx on compute-0
Nov 26 12:38:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cdaa26a384c24a12f358da6a6db0bf1cdd284f5f125f70124da3bad87fa3b29-merged.mount: Deactivated successfully.
Nov 26 12:38:57 compute-0 podman[98391]: 2025-11-26 12:38:57.134576529 +0000 UTC m=+0.569005760 container remove 7e9646a7c362e433a745c87ff9e89692eeb4b078851a7f94100f47031243451a (image=quay.io/ceph/ceph:v18, name=quirky_mendeleev, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:38:57 compute-0 systemd[1]: libpod-conmon-7e9646a7c362e433a745c87ff9e89692eeb4b078851a7f94100f47031243451a.scope: Deactivated successfully.
Nov 26 12:38:57 compute-0 ansible-async_wrapper.py[98389]: Module complete (98389)
Nov 26 12:38:57 compute-0 sudo[98467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:57 compute-0 sudo[98467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:57 compute-0 sudo[98467]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:57 compute-0 sudo[98500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:38:57 compute-0 sudo[98500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:57 compute-0 sudo[98500]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:57 compute-0 sudo[98525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:57 compute-0 sudo[98525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:57 compute-0 sudo[98525]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:57 compute-0 sudo[98550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 12:38:57 compute-0 sudo[98550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:57 compute-0 podman[98632]: 2025-11-26 12:38:57.536367074 +0000 UTC m=+0.028566300 container create e9dd07b3cf31d3a4bd89d06a2e69960cd74fb4d2f4a3dc9fb4d719b3733689ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jones, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 12:38:57 compute-0 systemd[1]: Started libpod-conmon-e9dd07b3cf31d3a4bd89d06a2e69960cd74fb4d2f4a3dc9fb4d719b3733689ca.scope.
Nov 26 12:38:57 compute-0 sudo[98665]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edfeqfuepzjjrmcebkcihuiuyqipuvkq ; /usr/bin/python3'
Nov 26 12:38:57 compute-0 sudo[98665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:57 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:57 compute-0 podman[98632]: 2025-11-26 12:38:57.587401742 +0000 UTC m=+0.079600968 container init e9dd07b3cf31d3a4bd89d06a2e69960cd74fb4d2f4a3dc9fb4d719b3733689ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jones, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:38:57 compute-0 podman[98632]: 2025-11-26 12:38:57.591557884 +0000 UTC m=+0.083757110 container start e9dd07b3cf31d3a4bd89d06a2e69960cd74fb4d2f4a3dc9fb4d719b3733689ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jones, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:38:57 compute-0 podman[98632]: 2025-11-26 12:38:57.592529655 +0000 UTC m=+0.084728881 container attach e9dd07b3cf31d3a4bd89d06a2e69960cd74fb4d2f4a3dc9fb4d719b3733689ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Nov 26 12:38:57 compute-0 determined_jones[98670]: 167 167
Nov 26 12:38:57 compute-0 systemd[1]: libpod-e9dd07b3cf31d3a4bd89d06a2e69960cd74fb4d2f4a3dc9fb4d719b3733689ca.scope: Deactivated successfully.
Nov 26 12:38:57 compute-0 podman[98632]: 2025-11-26 12:38:57.595526573 +0000 UTC m=+0.087725799 container died e9dd07b3cf31d3a4bd89d06a2e69960cd74fb4d2f4a3dc9fb4d719b3733689ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 12:38:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-3832eadfe4fce13cf2817ff5954d48f324a5484f1635ac8ce01be643a41c36e8-merged.mount: Deactivated successfully.
Nov 26 12:38:57 compute-0 podman[98632]: 2025-11-26 12:38:57.611644113 +0000 UTC m=+0.103843339 container remove e9dd07b3cf31d3a4bd89d06a2e69960cd74fb4d2f4a3dc9fb4d719b3733689ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jones, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 12:38:57 compute-0 podman[98632]: 2025-11-26 12:38:57.524643763 +0000 UTC m=+0.016842999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:57 compute-0 systemd[1]: libpod-conmon-e9dd07b3cf31d3a4bd89d06a2e69960cd74fb4d2f4a3dc9fb4d719b3733689ca.scope: Deactivated successfully.
Nov 26 12:38:57 compute-0 ceph-mon[74966]: 3.13 deep-scrub starts
Nov 26 12:38:57 compute-0 ceph-mon[74966]: 3.13 deep-scrub ok
Nov 26 12:38:57 compute-0 ceph-mon[74966]: 4.17 scrub starts
Nov 26 12:38:57 compute-0 ceph-mon[74966]: 4.17 scrub ok
Nov 26 12:38:57 compute-0 ceph-mon[74966]: from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 12:38:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.cpfqrx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 26 12:38:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.cpfqrx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 26 12:38:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:38:57 compute-0 ceph-mon[74966]: Deploying daemon rgw.rgw.compute-0.cpfqrx on compute-0
Nov 26 12:38:57 compute-0 systemd[1]: Reloading.
Nov 26 12:38:57 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Nov 26 12:38:57 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Nov 26 12:38:57 compute-0 python3[98672]: ansible-ansible.legacy.async_status Invoked with jid=j562982286282.98385 mode=status _async_dir=/root/.ansible_async
Nov 26 12:38:57 compute-0 sudo[98665]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:57 compute-0 systemd-rc-local-generator[98714]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:38:57 compute-0 systemd-sysv-generator[98717]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:38:57 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v63: 131 pgs: 1 active+clean+scrubbing, 130 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:38:57 compute-0 sudo[98770]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbjudzimcwjhtsqkbnjepxbhrkgtgqfs ; /usr/bin/python3'
Nov 26 12:38:57 compute-0 sudo[98770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:57 compute-0 systemd[1]: Reloading.
Nov 26 12:38:57 compute-0 systemd-rc-local-generator[98797]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:38:57 compute-0 systemd-sysv-generator[98804]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:38:57 compute-0 python3[98774]: ansible-ansible.legacy.async_status Invoked with jid=j562982286282.98385 mode=cleanup _async_dir=/root/.ansible_async
Nov 26 12:38:58 compute-0 sudo[98770]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:58 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.cpfqrx for f7d7fe93-41e5-51c4-b72d-63b38686102e...
Nov 26 12:38:58 compute-0 podman[98853]: 2025-11-26 12:38:58.293003347 +0000 UTC m=+0.030078210 container create 24825469580cdc96d0d87c6665027cd044645eaba023b274e077df7b42c86760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-rgw-rgw-compute-0-cpfqrx, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 26 12:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29e8f225175f9bba21d8c8460dfd9699a96cfb3e839890002fddee43c7b13d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29e8f225175f9bba21d8c8460dfd9699a96cfb3e839890002fddee43c7b13d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29e8f225175f9bba21d8c8460dfd9699a96cfb3e839890002fddee43c7b13d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29e8f225175f9bba21d8c8460dfd9699a96cfb3e839890002fddee43c7b13d8/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.cpfqrx supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:58 compute-0 podman[98853]: 2025-11-26 12:38:58.332397959 +0000 UTC m=+0.069472823 container init 24825469580cdc96d0d87c6665027cd044645eaba023b274e077df7b42c86760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-rgw-rgw-compute-0-cpfqrx, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 26 12:38:58 compute-0 podman[98853]: 2025-11-26 12:38:58.336178343 +0000 UTC m=+0.073253207 container start 24825469580cdc96d0d87c6665027cd044645eaba023b274e077df7b42c86760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-rgw-rgw-compute-0-cpfqrx, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:38:58 compute-0 bash[98853]: 24825469580cdc96d0d87c6665027cd044645eaba023b274e077df7b42c86760
Nov 26 12:38:58 compute-0 podman[98853]: 2025-11-26 12:38:58.280897875 +0000 UTC m=+0.017972759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:58 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.cpfqrx for f7d7fe93-41e5-51c4-b72d-63b38686102e.
Nov 26 12:38:58 compute-0 sudo[98550]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:58 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:38:58 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:58 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:38:58 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:58 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 26 12:38:58 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:58 compute-0 radosgw[98869]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 26 12:38:58 compute-0 radosgw[98869]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Nov 26 12:38:58 compute-0 radosgw[98869]: framework: beast
Nov 26 12:38:58 compute-0 ceph-mgr[75236]: [progress INFO root] complete: finished ev 30acd692-57bd-49fd-ae87-1be5cad78c57 (Updating rgw.rgw deployment (+1 -> 1))
Nov 26 12:38:58 compute-0 ceph-mgr[75236]: [progress INFO root] Completed event 30acd692-57bd-49fd-ae87-1be5cad78c57 (Updating rgw.rgw deployment (+1 -> 1)) in 1 seconds
Nov 26 12:38:58 compute-0 ceph-mgr[75236]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Nov 26 12:38:58 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 26 12:38:58 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 26 12:38:58 compute-0 radosgw[98869]: framework conf key: endpoint, val: 192.168.122.100:8082
Nov 26 12:38:58 compute-0 radosgw[98869]: init_numa not setting numa affinity
Nov 26 12:38:58 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:58 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 26 12:38:58 compute-0 sudo[98901]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwuzujjivdjdmyqawwhdxngvprwfvkjh ; /usr/bin/python3'
Nov 26 12:38:58 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:58 compute-0 ceph-mgr[75236]: [progress INFO root] update: starting ev 13887e53-0170-459f-8503-4b4ba35e9b94 (Updating mds.cephfs deployment (+1 -> 1))
Nov 26 12:38:58 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ipyiim", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 26 12:38:58 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ipyiim", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 26 12:38:58 compute-0 sudo[98901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:58 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ipyiim", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 26 12:38:58 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:38:58 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:38:58 compute-0 ceph-mgr[75236]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.ipyiim on compute-0
Nov 26 12:38:58 compute-0 ceph-mgr[75236]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.ipyiim on compute-0
Nov 26 12:38:58 compute-0 sudo[98932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:58 compute-0 sudo[98932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:58 compute-0 sudo[98932]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:58 compute-0 sudo[98982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:38:58 compute-0 sudo[98982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:58 compute-0 sudo[98982]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:58 compute-0 python3[98915]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:38:58 compute-0 sudo[99007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:58 compute-0 sudo[99007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:58 compute-0 sudo[99007]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:58 compute-0 podman[99030]: 2025-11-26 12:38:58.551502401 +0000 UTC m=+0.030818184 container create 533960731c8228a265d024c1270f05ae9154905213ead3f6f23c17d0152e4729 (image=quay.io/ceph/ceph:v18, name=vigorous_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Nov 26 12:38:58 compute-0 sudo[99038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e
Nov 26 12:38:58 compute-0 sudo[99038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:58 compute-0 systemd[1]: Started libpod-conmon-533960731c8228a265d024c1270f05ae9154905213ead3f6f23c17d0152e4729.scope.
Nov 26 12:38:58 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11851cf3f9d7d07410e752cc3170f1b9a6b56e693e7291e64f6f6f34b2b19bda/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11851cf3f9d7d07410e752cc3170f1b9a6b56e693e7291e64f6f6f34b2b19bda/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:58 compute-0 podman[99030]: 2025-11-26 12:38:58.609667554 +0000 UTC m=+0.088983348 container init 533960731c8228a265d024c1270f05ae9154905213ead3f6f23c17d0152e4729 (image=quay.io/ceph/ceph:v18, name=vigorous_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 26 12:38:58 compute-0 podman[99030]: 2025-11-26 12:38:58.614813481 +0000 UTC m=+0.094129255 container start 533960731c8228a265d024c1270f05ae9154905213ead3f6f23c17d0152e4729 (image=quay.io/ceph/ceph:v18, name=vigorous_almeida, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 12:38:58 compute-0 podman[99030]: 2025-11-26 12:38:58.616146282 +0000 UTC m=+0.095462076 container attach 533960731c8228a265d024c1270f05ae9154905213ead3f6f23c17d0152e4729 (image=quay.io/ceph/ceph:v18, name=vigorous_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 12:38:58 compute-0 podman[99030]: 2025-11-26 12:38:58.538618823 +0000 UTC m=+0.017934616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:38:58 compute-0 ceph-mon[74966]: 2.1a scrub starts
Nov 26 12:38:58 compute-0 ceph-mon[74966]: 2.1a scrub ok
Nov 26 12:38:58 compute-0 ceph-mon[74966]: pgmap v63: 131 pgs: 1 active+clean+scrubbing, 130 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:38:58 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:58 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:58 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:58 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:58 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:58 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ipyiim", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 26 12:38:58 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ipyiim", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 26 12:38:58 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:38:58 compute-0 podman[99107]: 2025-11-26 12:38:58.826902563 +0000 UTC m=+0.026814939 container create ca6aac2d045fb2c3094fc8a0c377bed0233e4cba44b78301aa598ac3cc2cd01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:38:58 compute-0 systemd[1]: Started libpod-conmon-ca6aac2d045fb2c3094fc8a0c377bed0233e4cba44b78301aa598ac3cc2cd01c.scope.
Nov 26 12:38:58 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:58 compute-0 podman[99107]: 2025-11-26 12:38:58.874802092 +0000 UTC m=+0.074714458 container init ca6aac2d045fb2c3094fc8a0c377bed0233e4cba44b78301aa598ac3cc2cd01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:38:58 compute-0 podman[99107]: 2025-11-26 12:38:58.879722905 +0000 UTC m=+0.079635271 container start ca6aac2d045fb2c3094fc8a0c377bed0233e4cba44b78301aa598ac3cc2cd01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_babbage, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:38:58 compute-0 podman[99107]: 2025-11-26 12:38:58.880783624 +0000 UTC m=+0.080696010 container attach ca6aac2d045fb2c3094fc8a0c377bed0233e4cba44b78301aa598ac3cc2cd01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_babbage, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 12:38:58 compute-0 systemd[1]: libpod-ca6aac2d045fb2c3094fc8a0c377bed0233e4cba44b78301aa598ac3cc2cd01c.scope: Deactivated successfully.
Nov 26 12:38:58 compute-0 gifted_babbage[99120]: 167 167
Nov 26 12:38:58 compute-0 conmon[99120]: conmon ca6aac2d045fb2c3094f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ca6aac2d045fb2c3094fc8a0c377bed0233e4cba44b78301aa598ac3cc2cd01c.scope/container/memory.events
Nov 26 12:38:58 compute-0 podman[99107]: 2025-11-26 12:38:58.883830896 +0000 UTC m=+0.083743262 container died ca6aac2d045fb2c3094fc8a0c377bed0233e4cba44b78301aa598ac3cc2cd01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_babbage, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 12:38:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-35ad0c431e076e2ce868147d17ea21ae61a704ba3f73950772dbf1d120ca1a4a-merged.mount: Deactivated successfully.
Nov 26 12:38:58 compute-0 podman[99107]: 2025-11-26 12:38:58.903358463 +0000 UTC m=+0.103270829 container remove ca6aac2d045fb2c3094fc8a0c377bed0233e4cba44b78301aa598ac3cc2cd01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:38:58 compute-0 podman[99107]: 2025-11-26 12:38:58.816004797 +0000 UTC m=+0.015917163 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:58 compute-0 systemd[1]: libpod-conmon-ca6aac2d045fb2c3094fc8a0c377bed0233e4cba44b78301aa598ac3cc2cd01c.scope: Deactivated successfully.
Nov 26 12:38:58 compute-0 systemd[1]: Reloading.
Nov 26 12:38:58 compute-0 systemd-rc-local-generator[99178]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:38:59 compute-0 systemd-sysv-generator[99181]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:38:59 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14263 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 12:38:59 compute-0 vigorous_almeida[99069]: 
Nov 26 12:38:59 compute-0 vigorous_almeida[99069]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 26 12:38:59 compute-0 podman[99030]: 2025-11-26 12:38:59.126597114 +0000 UTC m=+0.605912887 container died 533960731c8228a265d024c1270f05ae9154905213ead3f6f23c17d0152e4729 (image=quay.io/ceph/ceph:v18, name=vigorous_almeida, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:38:59 compute-0 systemd[1]: libpod-533960731c8228a265d024c1270f05ae9154905213ead3f6f23c17d0152e4729.scope: Deactivated successfully.
Nov 26 12:38:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-11851cf3f9d7d07410e752cc3170f1b9a6b56e693e7291e64f6f6f34b2b19bda-merged.mount: Deactivated successfully.
Nov 26 12:38:59 compute-0 podman[99030]: 2025-11-26 12:38:59.157933202 +0000 UTC m=+0.637248975 container remove 533960731c8228a265d024c1270f05ae9154905213ead3f6f23c17d0152e4729 (image=quay.io/ceph/ceph:v18, name=vigorous_almeida, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:38:59 compute-0 systemd[1]: libpod-conmon-533960731c8228a265d024c1270f05ae9154905213ead3f6f23c17d0152e4729.scope: Deactivated successfully.
Nov 26 12:38:59 compute-0 sudo[98901]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:59 compute-0 systemd[1]: Reloading.
Nov 26 12:38:59 compute-0 systemd-rc-local-generator[99234]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:38:59 compute-0 systemd-sysv-generator[99239]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:38:59 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Nov 26 12:38:59 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Nov 26 12:38:59 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Nov 26 12:38:59 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Nov 26 12:38:59 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2399447549' entity='client.rgw.rgw.compute-0.cpfqrx' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 26 12:38:59 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.ipyiim for f7d7fe93-41e5-51c4-b72d-63b38686102e...
Nov 26 12:38:59 compute-0 podman[99284]: 2025-11-26 12:38:59.577286893 +0000 UTC m=+0.030201290 container create bc6bc48477a30b6c2763c5b823f3b844742d755476240bb0c5066a188454d173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mds-cephfs-compute-0-ipyiim, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1420a31f2086711cacd6bbcfc1c5a7288163b0bd526cce5e149d1b089e340b3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1420a31f2086711cacd6bbcfc1c5a7288163b0bd526cce5e149d1b089e340b3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1420a31f2086711cacd6bbcfc1c5a7288163b0bd526cce5e149d1b089e340b3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1420a31f2086711cacd6bbcfc1c5a7288163b0bd526cce5e149d1b089e340b3c/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.ipyiim supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:59 compute-0 podman[99284]: 2025-11-26 12:38:59.617888001 +0000 UTC m=+0.070802398 container init bc6bc48477a30b6c2763c5b823f3b844742d755476240bb0c5066a188454d173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mds-cephfs-compute-0-ipyiim, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:38:59 compute-0 podman[99284]: 2025-11-26 12:38:59.624348926 +0000 UTC m=+0.077263313 container start bc6bc48477a30b6c2763c5b823f3b844742d755476240bb0c5066a188454d173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mds-cephfs-compute-0-ipyiim, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 12:38:59 compute-0 bash[99284]: bc6bc48477a30b6c2763c5b823f3b844742d755476240bb0c5066a188454d173
Nov 26 12:38:59 compute-0 podman[99284]: 2025-11-26 12:38:59.565221969 +0000 UTC m=+0.018136366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:38:59 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.ipyiim for f7d7fe93-41e5-51c4-b72d-63b38686102e.
Nov 26 12:38:59 compute-0 ceph-mon[74966]: Saving service rgw.rgw spec with placement compute-0
Nov 26 12:38:59 compute-0 ceph-mon[74966]: Deploying daemon mds.cephfs.compute-0.ipyiim on compute-0
Nov 26 12:38:59 compute-0 ceph-mon[74966]: from='client.14263 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 12:38:59 compute-0 ceph-mon[74966]: osdmap e32: 3 total, 3 up, 3 in
Nov 26 12:38:59 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2399447549' entity='client.rgw.rgw.compute-0.cpfqrx' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 26 12:38:59 compute-0 sudo[99038]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:59 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:38:59 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:59 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:38:59 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:59 compute-0 ceph-mds[99300]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 12:38:59 compute-0 ceph-mds[99300]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 26 12:38:59 compute-0 ceph-mds[99300]: main not setting numa affinity
Nov 26 12:38:59 compute-0 ceph-mds[99300]: pidfile_write: ignore empty --pid-file
Nov 26 12:38:59 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mds-cephfs-compute-0-ipyiim[99296]: starting mds.cephfs.compute-0.ipyiim at 
Nov 26 12:38:59 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 26 12:38:59 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:59 compute-0 ceph-mgr[75236]: [progress INFO root] complete: finished ev 13887e53-0170-459f-8503-4b4ba35e9b94 (Updating mds.cephfs deployment (+1 -> 1))
Nov 26 12:38:59 compute-0 ceph-mgr[75236]: [progress INFO root] Completed event 13887e53-0170-459f-8503-4b4ba35e9b94 (Updating mds.cephfs deployment (+1 -> 1)) in 1 seconds
Nov 26 12:38:59 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Nov 26 12:38:59 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:59 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 26 12:38:59 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:38:59 compute-0 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim Updating MDS map to version 2 from mon.0
Nov 26 12:38:59 compute-0 sudo[99350]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaazjacjzqqpagcmmufruefliniamiga ; /usr/bin/python3'
Nov 26 12:38:59 compute-0 sudo[99350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:38:59 compute-0 sudo[99337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:59 compute-0 sudo[99337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:59 compute-0 sudo[99337]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:59 compute-0 sudo[99370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 12:38:59 compute-0 sudo[99370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:59 compute-0 sudo[99370]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:59 compute-0 sudo[99395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:59 compute-0 sudo[99395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:59 compute-0 sudo[99395]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:59 compute-0 python3[99367]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:38:59 compute-0 sudo[99420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:38:59 compute-0 sudo[99420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:59 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v65: 132 pgs: 1 unknown, 1 active+clean+scrubbing, 130 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:38:59 compute-0 sudo[99420]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:59 compute-0 podman[99440]: 2025-11-26 12:38:59.877744233 +0000 UTC m=+0.034178094 container create 7806e4fc1f0635b7be116012249cc87fce7e472c4bec53533a6518988f30472b (image=quay.io/ceph/ceph:v18, name=wonderful_buck, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 12:38:59 compute-0 systemd[1]: Started libpod-conmon-7806e4fc1f0635b7be116012249cc87fce7e472c4bec53533a6518988f30472b.scope.
Nov 26 12:38:59 compute-0 sudo[99453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:38:59 compute-0 sudo[99453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:38:59 compute-0 sudo[99453]: pam_unix(sudo:session): session closed for user root
Nov 26 12:38:59 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e0044fee49a7dfe64b6c2bffc5cc85a8233d4bd9a2ca964972127cf4822413/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e0044fee49a7dfe64b6c2bffc5cc85a8233d4bd9a2ca964972127cf4822413/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:38:59 compute-0 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 12:38:59 compute-0 podman[99440]: 2025-11-26 12:38:59.936281678 +0000 UTC m=+0.092715560 container init 7806e4fc1f0635b7be116012249cc87fce7e472c4bec53533a6518988f30472b (image=quay.io/ceph/ceph:v18, name=wonderful_buck, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:38:59 compute-0 podman[99440]: 2025-11-26 12:38:59.941639204 +0000 UTC m=+0.098073066 container start 7806e4fc1f0635b7be116012249cc87fce7e472c4bec53533a6518988f30472b (image=quay.io/ceph/ceph:v18, name=wonderful_buck, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:38:59 compute-0 podman[99440]: 2025-11-26 12:38:59.943036075 +0000 UTC m=+0.099469937 container attach 7806e4fc1f0635b7be116012249cc87fce7e472c4bec53533a6518988f30472b (image=quay.io/ceph/ceph:v18, name=wonderful_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 12:38:59 compute-0 podman[99440]: 2025-11-26 12:38:59.864910789 +0000 UTC m=+0.021344671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:38:59 compute-0 sudo[99485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 26 12:38:59 compute-0 sudo[99485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:00 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Nov 26 12:39:00 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 32 pg[8.0( empty local-lis/les=0/0 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:00 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Nov 26 12:39:00 compute-0 podman[99586]: 2025-11-26 12:39:00.30989548 +0000 UTC m=+0.038871099 container exec ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:39:00 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 12:39:00 compute-0 wonderful_buck[99481]: 
Nov 26 12:39:00 compute-0 wonderful_buck[99481]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Nov 26 12:39:00 compute-0 podman[99586]: 2025-11-26 12:39:00.392053955 +0000 UTC m=+0.121029555 container exec_died ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 12:39:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 26 12:39:00 compute-0 systemd[1]: libpod-7806e4fc1f0635b7be116012249cc87fce7e472c4bec53533a6518988f30472b.scope: Deactivated successfully.
Nov 26 12:39:00 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2399447549' entity='client.rgw.rgw.compute-0.cpfqrx' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 26 12:39:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Nov 26 12:39:00 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Nov 26 12:39:00 compute-0 podman[99440]: 2025-11-26 12:39:00.399170135 +0000 UTC m=+0.555603997 container died 7806e4fc1f0635b7be116012249cc87fce7e472c4bec53533a6518988f30472b (image=quay.io/ceph/ceph:v18, name=wonderful_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 26 12:39:00 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 33 pg[8.0( empty local-lis/les=32/33 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-17e0044fee49a7dfe64b6c2bffc5cc85a8233d4bd9a2ca964972127cf4822413-merged.mount: Deactivated successfully.
Nov 26 12:39:00 compute-0 podman[99440]: 2025-11-26 12:39:00.428265791 +0000 UTC m=+0.584699653 container remove 7806e4fc1f0635b7be116012249cc87fce7e472c4bec53533a6518988f30472b (image=quay.io/ceph/ceph:v18, name=wonderful_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 12:39:00 compute-0 systemd[1]: libpod-conmon-7806e4fc1f0635b7be116012249cc87fce7e472c4bec53533a6518988f30472b.scope: Deactivated successfully.
Nov 26 12:39:00 compute-0 sudo[99350]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:00 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:00 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:00 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:00 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:00 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:00 compute-0 ceph-mon[74966]: pgmap v65: 132 pgs: 1 unknown, 1 active+clean+scrubbing, 130 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:39:00 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2399447549' entity='client.rgw.rgw.compute-0.cpfqrx' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 26 12:39:00 compute-0 ceph-mon[74966]: osdmap e33: 3 total, 3 up, 3 in
Nov 26 12:39:00 compute-0 ceph-mon[74966]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 26 12:39:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).mds e3 new map
Nov 26 12:39:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-26T12:38:49.414687+0000
                                           modified        2025-11-26T12:38:49.414741+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ipyiim{-1:14265} state up:standby seq 1 join_fscid=1 addr [v2:192.168.122.100:6814/1310645866,v1:192.168.122.100:6815/1310645866] compat {c=[1],r=[1],i=[7ff]}]
Nov 26 12:39:00 compute-0 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim Updating MDS map to version 3 from mon.0
Nov 26 12:39:00 compute-0 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim Monitors have assigned me to become a standby.
Nov 26 12:39:00 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1310645866,v1:192.168.122.100:6815/1310645866] up:boot
Nov 26 12:39:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/1310645866,v1:192.168.122.100:6815/1310645866] as mds.0
Nov 26 12:39:00 compute-0 ceph-mon[74966]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.ipyiim assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 26 12:39:00 compute-0 ceph-mon[74966]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 26 12:39:00 compute-0 ceph-mon[74966]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 26 12:39:00 compute-0 ceph-mon[74966]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 26 12:39:00 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Nov 26 12:39:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.ipyiim"} v 0) v1
Nov 26 12:39:00 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.ipyiim"}]: dispatch
Nov 26 12:39:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).mds e3 all = 0
Nov 26 12:39:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).mds e4 new map
Nov 26 12:39:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-26T12:38:49.414687+0000
                                           modified        2025-11-26T12:39:00.680673+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14265}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-0.ipyiim{0:14265} state up:creating seq 1 join_fscid=1 addr [v2:192.168.122.100:6814/1310645866,v1:192.168.122.100:6815/1310645866] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Nov 26 12:39:00 compute-0 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim Updating MDS map to version 4 from mon.0
Nov 26 12:39:00 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.ipyiim=up:creating}
Nov 26 12:39:00 compute-0 ceph-mds[99300]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 26 12:39:00 compute-0 ceph-mds[99300]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Nov 26 12:39:00 compute-0 ceph-mds[99300]: mds.0.cache creating system inode with ino:0x1
Nov 26 12:39:00 compute-0 ceph-mds[99300]: mds.0.cache creating system inode with ino:0x100
Nov 26 12:39:00 compute-0 ceph-mds[99300]: mds.0.cache creating system inode with ino:0x600
Nov 26 12:39:00 compute-0 ceph-mds[99300]: mds.0.cache creating system inode with ino:0x601
Nov 26 12:39:00 compute-0 ceph-mds[99300]: mds.0.cache creating system inode with ino:0x602
Nov 26 12:39:00 compute-0 ceph-mds[99300]: mds.0.cache creating system inode with ino:0x603
Nov 26 12:39:00 compute-0 ceph-mds[99300]: mds.0.cache creating system inode with ino:0x604
Nov 26 12:39:00 compute-0 ceph-mds[99300]: mds.0.cache creating system inode with ino:0x605
Nov 26 12:39:00 compute-0 ceph-mds[99300]: mds.0.cache creating system inode with ino:0x606
Nov 26 12:39:00 compute-0 ceph-mds[99300]: mds.0.cache creating system inode with ino:0x607
Nov 26 12:39:00 compute-0 ceph-mds[99300]: mds.0.cache creating system inode with ino:0x608
Nov 26 12:39:00 compute-0 ceph-mds[99300]: mds.0.cache creating system inode with ino:0x609
Nov 26 12:39:00 compute-0 ceph-mds[99300]: mds.0.4 creating_done
Nov 26 12:39:00 compute-0 ceph-mon[74966]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.ipyiim is now active in filesystem cephfs as rank 0
Nov 26 12:39:00 compute-0 sudo[99485]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:39:00 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:39:00 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:39:00 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:39:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 12:39:00 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:39:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 12:39:00 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:00 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 9f3fcf6d-08b7-4fa5-b928-de72eb959739 does not exist
Nov 26 12:39:00 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev d8103df1-b2c0-4cf6-b2b7-65f22ebeda99 does not exist
Nov 26 12:39:00 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 6529d20d-66d3-4946-a386-76f2e70990b0 does not exist
Nov 26 12:39:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 12:39:00 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:39:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 12:39:00 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:39:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:39:00 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:39:00 compute-0 sudo[99739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:39:00 compute-0 sudo[99739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:00 compute-0 sudo[99739]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:00 compute-0 ceph-mgr[75236]: [progress INFO root] Writing back 9 completed events
Nov 26 12:39:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 26 12:39:00 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:00 compute-0 sudo[99764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:39:00 compute-0 sudo[99764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:00 compute-0 sudo[99764]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:39:00 compute-0 sudo[99789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:39:00 compute-0 sudo[99789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:00 compute-0 sudo[99789]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:01 compute-0 sudo[99814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 12:39:01 compute-0 sudo[99814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:01 compute-0 sudo[99862]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbmmnoutlavdlqwrnckbinkpkrxrsccs ; /usr/bin/python3'
Nov 26 12:39:01 compute-0 sudo[99862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:39:01 compute-0 python3[99864]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:39:01 compute-0 podman[99893]: 2025-11-26 12:39:01.241800681 +0000 UTC m=+0.032694438 container create b7eefb170f55b1f333de8e9bb1f9f110b42c19f3bf12f75e8db2dbbc737aa5ea (image=quay.io/ceph/ceph:v18, name=heuristic_haslett, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:39:01 compute-0 podman[99905]: 2025-11-26 12:39:01.268339138 +0000 UTC m=+0.034071734 container create e8305df17682862bff5d0a95e1084475ccbc65640e6fee09967cdc1db74bb26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:39:01 compute-0 systemd[1]: Started libpod-conmon-b7eefb170f55b1f333de8e9bb1f9f110b42c19f3bf12f75e8db2dbbc737aa5ea.scope.
Nov 26 12:39:01 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:39:01 compute-0 systemd[1]: Started libpod-conmon-e8305df17682862bff5d0a95e1084475ccbc65640e6fee09967cdc1db74bb26b.scope.
Nov 26 12:39:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b64db318c664053bd5fb4eadd45955761cee1de256374dfd64da8a19c4af38d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b64db318c664053bd5fb4eadd45955761cee1de256374dfd64da8a19c4af38d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:01 compute-0 podman[99893]: 2025-11-26 12:39:01.299036824 +0000 UTC m=+0.089930582 container init b7eefb170f55b1f333de8e9bb1f9f110b42c19f3bf12f75e8db2dbbc737aa5ea (image=quay.io/ceph/ceph:v18, name=heuristic_haslett, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 12:39:01 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:39:01 compute-0 podman[99893]: 2025-11-26 12:39:01.306562725 +0000 UTC m=+0.097456483 container start b7eefb170f55b1f333de8e9bb1f9f110b42c19f3bf12f75e8db2dbbc737aa5ea (image=quay.io/ceph/ceph:v18, name=heuristic_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 26 12:39:01 compute-0 podman[99893]: 2025-11-26 12:39:01.308509483 +0000 UTC m=+0.099403251 container attach b7eefb170f55b1f333de8e9bb1f9f110b42c19f3bf12f75e8db2dbbc737aa5ea (image=quay.io/ceph/ceph:v18, name=heuristic_haslett, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:39:01 compute-0 podman[99905]: 2025-11-26 12:39:01.310188267 +0000 UTC m=+0.075920853 container init e8305df17682862bff5d0a95e1084475ccbc65640e6fee09967cdc1db74bb26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shaw, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:39:01 compute-0 podman[99905]: 2025-11-26 12:39:01.314532072 +0000 UTC m=+0.080264658 container start e8305df17682862bff5d0a95e1084475ccbc65640e6fee09967cdc1db74bb26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shaw, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 12:39:01 compute-0 podman[99905]: 2025-11-26 12:39:01.31599583 +0000 UTC m=+0.081728417 container attach e8305df17682862bff5d0a95e1084475ccbc65640e6fee09967cdc1db74bb26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shaw, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:39:01 compute-0 reverent_shaw[99924]: 167 167
Nov 26 12:39:01 compute-0 systemd[1]: libpod-e8305df17682862bff5d0a95e1084475ccbc65640e6fee09967cdc1db74bb26b.scope: Deactivated successfully.
Nov 26 12:39:01 compute-0 conmon[99924]: conmon e8305df17682862bff5d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e8305df17682862bff5d0a95e1084475ccbc65640e6fee09967cdc1db74bb26b.scope/container/memory.events
Nov 26 12:39:01 compute-0 podman[99905]: 2025-11-26 12:39:01.318735322 +0000 UTC m=+0.084467908 container died e8305df17682862bff5d0a95e1084475ccbc65640e6fee09967cdc1db74bb26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 12:39:01 compute-0 podman[99893]: 2025-11-26 12:39:01.225883496 +0000 UTC m=+0.016777274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:39:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-d583b8691a968f670fde82d7e363244f10ad073038a305eb59f19c720036e441-merged.mount: Deactivated successfully.
Nov 26 12:39:01 compute-0 podman[99905]: 2025-11-26 12:39:01.340501608 +0000 UTC m=+0.106234194 container remove e8305df17682862bff5d0a95e1084475ccbc65640e6fee09967cdc1db74bb26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:39:01 compute-0 podman[99905]: 2025-11-26 12:39:01.257435753 +0000 UTC m=+0.023168358 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:39:01 compute-0 systemd[1]: libpod-conmon-e8305df17682862bff5d0a95e1084475ccbc65640e6fee09967cdc1db74bb26b.scope: Deactivated successfully.
Nov 26 12:39:01 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Nov 26 12:39:01 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Nov 26 12:39:01 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Nov 26 12:39:01 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Nov 26 12:39:01 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2399447549' entity='client.rgw.rgw.compute-0.cpfqrx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 26 12:39:01 compute-0 ansible-async_wrapper.py[98388]: Done in kid B.
Nov 26 12:39:01 compute-0 podman[99947]: 2025-11-26 12:39:01.462008601 +0000 UTC m=+0.028845787 container create c6eb4f421bebcb3e4b69ff9652dd1b0286cf1b5905f07e034ee5fee7eaf11029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 12:39:01 compute-0 systemd[1]: Started libpod-conmon-c6eb4f421bebcb3e4b69ff9652dd1b0286cf1b5905f07e034ee5fee7eaf11029.scope.
Nov 26 12:39:01 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:39:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe8c3dde29f5f3f52bf5e94b380c06ad774a7e538bda269e4e6f3cb40c25568/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe8c3dde29f5f3f52bf5e94b380c06ad774a7e538bda269e4e6f3cb40c25568/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe8c3dde29f5f3f52bf5e94b380c06ad774a7e538bda269e4e6f3cb40c25568/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe8c3dde29f5f3f52bf5e94b380c06ad774a7e538bda269e4e6f3cb40c25568/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe8c3dde29f5f3f52bf5e94b380c06ad774a7e538bda269e4e6f3cb40c25568/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:01 compute-0 podman[99947]: 2025-11-26 12:39:01.529617199 +0000 UTC m=+0.096454395 container init c6eb4f421bebcb3e4b69ff9652dd1b0286cf1b5905f07e034ee5fee7eaf11029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:39:01 compute-0 podman[99947]: 2025-11-26 12:39:01.534874445 +0000 UTC m=+0.101711621 container start c6eb4f421bebcb3e4b69ff9652dd1b0286cf1b5905f07e034ee5fee7eaf11029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 12:39:01 compute-0 podman[99947]: 2025-11-26 12:39:01.536340037 +0000 UTC m=+0.103177224 container attach c6eb4f421bebcb3e4b69ff9652dd1b0286cf1b5905f07e034ee5fee7eaf11029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:39:01 compute-0 podman[99947]: 2025-11-26 12:39:01.449718822 +0000 UTC m=+0.016556019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:39:01 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Nov 26 12:39:01 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Nov 26 12:39:01 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Nov 26 12:39:01 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Nov 26 12:39:01 compute-0 ceph-mon[74966]: 3.14 scrub starts
Nov 26 12:39:01 compute-0 ceph-mon[74966]: 3.14 scrub ok
Nov 26 12:39:01 compute-0 ceph-mon[74966]: from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 12:39:01 compute-0 ceph-mon[74966]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 26 12:39:01 compute-0 ceph-mon[74966]: mds.? [v2:192.168.122.100:6814/1310645866,v1:192.168.122.100:6815/1310645866] up:boot
Nov 26 12:39:01 compute-0 ceph-mon[74966]: daemon mds.cephfs.compute-0.ipyiim assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 26 12:39:01 compute-0 ceph-mon[74966]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 26 12:39:01 compute-0 ceph-mon[74966]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 26 12:39:01 compute-0 ceph-mon[74966]: Cluster is now healthy
Nov 26 12:39:01 compute-0 ceph-mon[74966]: fsmap cephfs:0 1 up:standby
Nov 26 12:39:01 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.ipyiim"}]: dispatch
Nov 26 12:39:01 compute-0 ceph-mon[74966]: fsmap cephfs:1 {0=cephfs.compute-0.ipyiim=up:creating}
Nov 26 12:39:01 compute-0 ceph-mon[74966]: daemon mds.cephfs.compute-0.ipyiim is now active in filesystem cephfs as rank 0
Nov 26 12:39:01 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:01 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:01 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:39:01 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:39:01 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:01 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:39:01 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:39:01 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:39:01 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:01 compute-0 ceph-mon[74966]: osdmap e34: 3 total, 3 up, 3 in
Nov 26 12:39:01 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2399447549' entity='client.rgw.rgw.compute-0.cpfqrx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 26 12:39:01 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).mds e5 new map
Nov 26 12:39:01 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-26T12:38:49.414687+0000
                                           modified        2025-11-26T12:39:01.682081+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14265}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-0.ipyiim{0:14265} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/1310645866,v1:192.168.122.100:6815/1310645866] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Nov 26 12:39:01 compute-0 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim Updating MDS map to version 5 from mon.0
Nov 26 12:39:01 compute-0 ceph-mds[99300]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 26 12:39:01 compute-0 ceph-mds[99300]: mds.0.4 handle_mds_map state change up:creating --> up:active
Nov 26 12:39:01 compute-0 ceph-mds[99300]: mds.0.4 recovery_done -- successful recovery!
Nov 26 12:39:01 compute-0 ceph-mds[99300]: mds.0.4 active_start
Nov 26 12:39:01 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1310645866,v1:192.168.122.100:6815/1310645866] up:active
Nov 26 12:39:01 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.ipyiim=up:active}
Nov 26 12:39:01 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14269 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 12:39:01 compute-0 heuristic_haslett[99918]: 
Nov 26 12:39:01 compute-0 heuristic_haslett[99918]: [{"container_id": "3e7332a87e08", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.49%", "created": "2025-11-26T12:37:54.342893Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-11-26T12:37:54.378128Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-26T12:39:00.825715Z", "memory_usage": 11639193, "ports": [], "service_name": "crash", "started": "2025-11-26T12:37:54.272378Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e@crash.compute-0", "version": "18.2.7"}, {"container_id": "bc6bc48477a3", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "7.80%", "created": "2025-11-26T12:38:59.630860Z", "daemon_id": "cephfs.compute-0.ipyiim", "daemon_name": "mds.cephfs.compute-0.ipyiim", "daemon_type": "mds", "events": ["2025-11-26T12:38:59.661416Z daemon:mds.cephfs.compute-0.ipyiim [INFO] \"Deployed mds.cephfs.compute-0.ipyiim on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-26T12:39:00.826002Z", "memory_usage": 12729712, "ports": [], "service_name": "mds.cephfs", "started": "2025-11-26T12:38:59.568805Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e@mds.cephfs.compute-0.ipyiim", "version": "18.2.7"}, {"container_id": "c06d21624ca8", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "31.64%", "created": "2025-11-26T12:36:57.403651Z", "daemon_id": "compute-0.whkbdn", "daemon_name": "mgr.compute-0.whkbdn", "daemon_type": "mgr", "events": ["2025-11-26T12:37:57.682828Z daemon:mgr.compute-0.whkbdn [INFO] \"Reconfigured mgr.compute-0.whkbdn on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-26T12:39:00.825659Z", "memory_usage": 548719820, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-11-26T12:36:57.345673Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e@mgr.compute-0.whkbdn", "version": "18.2.7"}, {"container_id": "ba65664ab41f", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], 
"container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "1.71%", "created": "2025-11-26T12:36:53.883618Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-11-26T12:37:57.142211Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-26T12:39:00.825575Z", "memory_request": 2147483648, "memory_usage": 38252052, "ports": [], "service_name": "mon", "started": "2025-11-26T12:36:55.819213Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e@mon.compute-0", "version": "18.2.7"}, {"container_id": "9981961b7997", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.17%", "created": "2025-11-26T12:38:15.569150Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-11-26T12:38:15.599429Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-26T12:39:00.825791Z", "memory_request": 4294967296, "memory_usage": 59653488, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-26T12:38:15.504837Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e@osd.0", "version": "18.2.7"}, {"container_id": "7fe95a8b384c", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.34%", "created": "2025-11-26T12:38:18.844206Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-11-26T12:38:18.932341Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-26T12:39:00.825848Z", "memory_request": 4294967296, "memory_usage": 63491276, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-26T12:38:18.695428Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e@osd.1", "version": "18.2.7"}, {"container_id": "fad0efe7fb69", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.41%", "created": "2025-11-26T12:38:22.364266Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-11-26T12:38:22.445960Z daemon:osd.2 [INFO] 
\"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-26T12:39:00.825898Z", "memory_request": 4294967296, "memory_usage": 64088965, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-26T12:38:22.211172Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e@osd.2", "version": "18.2.7"}, {"container_id": "24825469580c", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.83%", "created": "2025-11-26T12:38:58.342848Z", "daemon_id": "rgw.compute-0.cpfqrx", "daemon_name": "rgw.rgw.compute-0.cpfqrx", "daemon_type": "rgw", "events": ["2025-11-26T12:38:58.379101Z daemon:rgw.rgw.compute-0.cpfqrx [INFO] \"Deployed rgw.rgw.compute-0.cpfqrx on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "last_refresh": "2025-11-26T12:39:00.825951Z", "memory_usage": 20887633, "ports": [8082], "service_name": "rgw.rgw", "started": "2025-11-26T12:38:58.284447Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e@rgw.rgw.compute-0.cpfqrx", "version": "18.2.7"}]
Nov 26 12:39:01 compute-0 systemd[1]: libpod-b7eefb170f55b1f333de8e9bb1f9f110b42c19f3bf12f75e8db2dbbc737aa5ea.scope: Deactivated successfully.
Nov 26 12:39:01 compute-0 podman[99893]: 2025-11-26 12:39:01.767387012 +0000 UTC m=+0.558280780 container died b7eefb170f55b1f333de8e9bb1f9f110b42c19f3bf12f75e8db2dbbc737aa5ea (image=quay.io/ceph/ceph:v18, name=heuristic_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 12:39:01 compute-0 rsyslogd[962]: message too long (8588) with configured size 8096, begin of message is: [{"container_id": "3e7332a87e08", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 26 12:39:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b64db318c664053bd5fb4eadd45955761cee1de256374dfd64da8a19c4af38d-merged.mount: Deactivated successfully.
Nov 26 12:39:01 compute-0 podman[99893]: 2025-11-26 12:39:01.790498422 +0000 UTC m=+0.581392180 container remove b7eefb170f55b1f333de8e9bb1f9f110b42c19f3bf12f75e8db2dbbc737aa5ea (image=quay.io/ceph/ceph:v18, name=heuristic_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 12:39:01 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 34 pg[9.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:01 compute-0 sudo[99862]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:01 compute-0 systemd[1]: libpod-conmon-b7eefb170f55b1f333de8e9bb1f9f110b42c19f3bf12f75e8db2dbbc737aa5ea.scope: Deactivated successfully.
Nov 26 12:39:01 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v68: 133 pgs: 1 active+clean+scrubbing, 2 unknown, 130 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s wr, 7 op/s
Nov 26 12:39:02 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Nov 26 12:39:02 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Nov 26 12:39:02 compute-0 elated_bassi[99960]: --> passed data devices: 0 physical, 3 LVM
Nov 26 12:39:02 compute-0 elated_bassi[99960]: --> relative data size: 1.0
Nov 26 12:39:02 compute-0 elated_bassi[99960]: --> All data devices are unavailable
Nov 26 12:39:02 compute-0 systemd[1]: libpod-c6eb4f421bebcb3e4b69ff9652dd1b0286cf1b5905f07e034ee5fee7eaf11029.scope: Deactivated successfully.
Nov 26 12:39:02 compute-0 podman[99947]: 2025-11-26 12:39:02.366210368 +0000 UTC m=+0.933047545 container died c6eb4f421bebcb3e4b69ff9652dd1b0286cf1b5905f07e034ee5fee7eaf11029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bassi, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 12:39:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fe8c3dde29f5f3f52bf5e94b380c06ad774a7e538bda269e4e6f3cb40c25568-merged.mount: Deactivated successfully.
Nov 26 12:39:02 compute-0 podman[99947]: 2025-11-26 12:39:02.396674514 +0000 UTC m=+0.963511689 container remove c6eb4f421bebcb3e4b69ff9652dd1b0286cf1b5905f07e034ee5fee7eaf11029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bassi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 12:39:02 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Nov 26 12:39:02 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2399447549' entity='client.rgw.rgw.compute-0.cpfqrx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 26 12:39:02 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Nov 26 12:39:02 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Nov 26 12:39:02 compute-0 systemd[1]: libpod-conmon-c6eb4f421bebcb3e4b69ff9652dd1b0286cf1b5905f07e034ee5fee7eaf11029.scope: Deactivated successfully.
Nov 26 12:39:02 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 35 pg[9.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:02 compute-0 sudo[99814]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:02 compute-0 sudo[100034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:39:02 compute-0 sudo[100034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:02 compute-0 sudo[100034]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:02 compute-0 sudo[100059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:39:02 compute-0 sudo[100059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:02 compute-0 sudo[100059]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:02 compute-0 sudo[100084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:39:02 compute-0 sudo[100130]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wefoxmncpmevoekncjdbagkhkkfkaldo ; /usr/bin/python3'
Nov 26 12:39:02 compute-0 sudo[100084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:02 compute-0 sudo[100130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:39:02 compute-0 sudo[100084]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:02 compute-0 sudo[100135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- lvm list --format json
Nov 26 12:39:02 compute-0 sudo[100135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:02 compute-0 python3[100134]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:39:02 compute-0 ceph-mon[74966]: 4.19 scrub starts
Nov 26 12:39:02 compute-0 ceph-mon[74966]: 4.19 scrub ok
Nov 26 12:39:02 compute-0 ceph-mon[74966]: 2.1e scrub starts
Nov 26 12:39:02 compute-0 ceph-mon[74966]: 2.1e scrub ok
Nov 26 12:39:02 compute-0 ceph-mon[74966]: mds.? [v2:192.168.122.100:6814/1310645866,v1:192.168.122.100:6815/1310645866] up:active
Nov 26 12:39:02 compute-0 ceph-mon[74966]: fsmap cephfs:1 {0=cephfs.compute-0.ipyiim=up:active}
Nov 26 12:39:02 compute-0 ceph-mon[74966]: from='client.14269 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 12:39:02 compute-0 ceph-mon[74966]: pgmap v68: 133 pgs: 1 active+clean+scrubbing, 2 unknown, 130 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s wr, 7 op/s
Nov 26 12:39:02 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2399447549' entity='client.rgw.rgw.compute-0.cpfqrx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 26 12:39:02 compute-0 ceph-mon[74966]: osdmap e35: 3 total, 3 up, 3 in
Nov 26 12:39:02 compute-0 podman[100160]: 2025-11-26 12:39:02.713166097 +0000 UTC m=+0.034171332 container create b90f225df4f8fb45d1b27f642f3341031d70849db70cec9688f233fbf28a2075 (image=quay.io/ceph/ceph:v18, name=modest_sammet, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:39:02 compute-0 systemd[1]: Started libpod-conmon-b90f225df4f8fb45d1b27f642f3341031d70849db70cec9688f233fbf28a2075.scope.
Nov 26 12:39:02 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:39:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c820306a69867f3dc7b5b5a3560983b40aad47d6409ef2e3d5656586f9c9ddea/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c820306a69867f3dc7b5b5a3560983b40aad47d6409ef2e3d5656586f9c9ddea/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:02 compute-0 podman[100160]: 2025-11-26 12:39:02.761430904 +0000 UTC m=+0.082436140 container init b90f225df4f8fb45d1b27f642f3341031d70849db70cec9688f233fbf28a2075 (image=quay.io/ceph/ceph:v18, name=modest_sammet, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:39:02 compute-0 podman[100160]: 2025-11-26 12:39:02.767816016 +0000 UTC m=+0.088821251 container start b90f225df4f8fb45d1b27f642f3341031d70849db70cec9688f233fbf28a2075 (image=quay.io/ceph/ceph:v18, name=modest_sammet, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:39:02 compute-0 podman[100160]: 2025-11-26 12:39:02.769004215 +0000 UTC m=+0.090009480 container attach b90f225df4f8fb45d1b27f642f3341031d70849db70cec9688f233fbf28a2075 (image=quay.io/ceph/ceph:v18, name=modest_sammet, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:39:02 compute-0 podman[100160]: 2025-11-26 12:39:02.701006734 +0000 UTC m=+0.022011989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:39:02 compute-0 podman[100206]: 2025-11-26 12:39:02.850499752 +0000 UTC m=+0.028380880 container create 279cbca14096f9ee3abdb16fa2c46f42130b603b43eec4c06b54076e200ebc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_beaver, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:39:02 compute-0 systemd[1]: Started libpod-conmon-279cbca14096f9ee3abdb16fa2c46f42130b603b43eec4c06b54076e200ebc2d.scope.
Nov 26 12:39:02 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:39:02 compute-0 podman[100206]: 2025-11-26 12:39:02.894674444 +0000 UTC m=+0.072555571 container init 279cbca14096f9ee3abdb16fa2c46f42130b603b43eec4c06b54076e200ebc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_beaver, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 12:39:02 compute-0 podman[100206]: 2025-11-26 12:39:02.899804149 +0000 UTC m=+0.077685287 container start 279cbca14096f9ee3abdb16fa2c46f42130b603b43eec4c06b54076e200ebc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_beaver, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 12:39:02 compute-0 podman[100206]: 2025-11-26 12:39:02.900888372 +0000 UTC m=+0.078769510 container attach 279cbca14096f9ee3abdb16fa2c46f42130b603b43eec4c06b54076e200ebc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_beaver, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 12:39:02 compute-0 systemd[1]: libpod-279cbca14096f9ee3abdb16fa2c46f42130b603b43eec4c06b54076e200ebc2d.scope: Deactivated successfully.
Nov 26 12:39:02 compute-0 great_beaver[100219]: 167 167
Nov 26 12:39:02 compute-0 conmon[100219]: conmon 279cbca14096f9ee3abd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-279cbca14096f9ee3abdb16fa2c46f42130b603b43eec4c06b54076e200ebc2d.scope/container/memory.events
Nov 26 12:39:02 compute-0 podman[100206]: 2025-11-26 12:39:02.903880841 +0000 UTC m=+0.081761979 container died 279cbca14096f9ee3abdb16fa2c46f42130b603b43eec4c06b54076e200ebc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_beaver, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:39:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-688167e4ad44c393dcd75ed6cdde94ce62b985be8b7058815c79def1a8cadf01-merged.mount: Deactivated successfully.
Nov 26 12:39:02 compute-0 podman[100206]: 2025-11-26 12:39:02.925105546 +0000 UTC m=+0.102986684 container remove 279cbca14096f9ee3abdb16fa2c46f42130b603b43eec4c06b54076e200ebc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:39:02 compute-0 podman[100206]: 2025-11-26 12:39:02.837548087 +0000 UTC m=+0.015429245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:39:02 compute-0 systemd[1]: libpod-conmon-279cbca14096f9ee3abdb16fa2c46f42130b603b43eec4c06b54076e200ebc2d.scope: Deactivated successfully.
Nov 26 12:39:03 compute-0 podman[100241]: 2025-11-26 12:39:03.036398191 +0000 UTC m=+0.028816092 container create df151ad18e7967e9e0032dd4afe9cf67e3bd143ab39904db5f89f5b3a45eb6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 12:39:03 compute-0 systemd[1]: Started libpod-conmon-df151ad18e7967e9e0032dd4afe9cf67e3bd143ab39904db5f89f5b3a45eb6b4.scope.
Nov 26 12:39:03 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:39:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ea2c205db8c9fb184e1f3fd0749663b7ff4a66256ba4bad365d709a841505bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ea2c205db8c9fb184e1f3fd0749663b7ff4a66256ba4bad365d709a841505bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ea2c205db8c9fb184e1f3fd0749663b7ff4a66256ba4bad365d709a841505bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ea2c205db8c9fb184e1f3fd0749663b7ff4a66256ba4bad365d709a841505bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:03 compute-0 podman[100241]: 2025-11-26 12:39:03.095997917 +0000 UTC m=+0.088415818 container init df151ad18e7967e9e0032dd4afe9cf67e3bd143ab39904db5f89f5b3a45eb6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kalam, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 12:39:03 compute-0 podman[100241]: 2025-11-26 12:39:03.101304557 +0000 UTC m=+0.093722457 container start df151ad18e7967e9e0032dd4afe9cf67e3bd143ab39904db5f89f5b3a45eb6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kalam, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 12:39:03 compute-0 podman[100241]: 2025-11-26 12:39:03.104821405 +0000 UTC m=+0.097239325 container attach df151ad18e7967e9e0032dd4afe9cf67e3bd143ab39904db5f89f5b3a45eb6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kalam, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 12:39:03 compute-0 podman[100241]: 2025-11-26 12:39:03.025437006 +0000 UTC m=+0.017854926 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:39:03 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 26 12:39:03 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2324126989' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 26 12:39:03 compute-0 modest_sammet[100184]: 
Nov 26 12:39:03 compute-0 modest_sammet[100184]: {"fsid":"f7d7fe93-41e5-51c4-b72d-63b38686102e","health":{"status":"HEALTH_WARN","checks":{"POOL_APP_NOT_ENABLED":{"severity":"HEALTH_WARN","summary":{"message":"1 pool(s) do not have an application enabled","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":127,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":35,"num_osds":3,"num_up_osds":3,"osd_up_since":1764160707,"num_in_osds":3,"osd_in_since":1764160688,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":130},{"state_name":"unknown","count":2},{"state_name":"active+clean+scrubbing","count":1}],"num_pgs":133,"num_pools":9,"num_objects":23,"data_bytes":461642,"bytes_used":83861504,"bytes_avail":64328065024,"bytes_total":64411926528,"unknown_pgs_ratio":0.015037594363093376,"write_bytes_sec":2388,"read_op_per_sec":0,"write_op_per_sec":7},"fsmap":{"epoch":5,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.ipyiim","status":"up:active","gid":14265}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-26T12:38:37.846513+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Nov 26 12:39:03 compute-0 systemd[1]: libpod-b90f225df4f8fb45d1b27f642f3341031d70849db70cec9688f233fbf28a2075.scope: Deactivated successfully.
Nov 26 12:39:03 compute-0 podman[100160]: 2025-11-26 12:39:03.27033411 +0000 UTC m=+0.591339355 container died b90f225df4f8fb45d1b27f642f3341031d70849db70cec9688f233fbf28a2075 (image=quay.io/ceph/ceph:v18, name=modest_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 12:39:03 compute-0 podman[100160]: 2025-11-26 12:39:03.292469671 +0000 UTC m=+0.613474906 container remove b90f225df4f8fb45d1b27f642f3341031d70849db70cec9688f233fbf28a2075 (image=quay.io/ceph/ceph:v18, name=modest_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 12:39:03 compute-0 systemd[1]: libpod-conmon-b90f225df4f8fb45d1b27f642f3341031d70849db70cec9688f233fbf28a2075.scope: Deactivated successfully.
Nov 26 12:39:03 compute-0 sudo[100130]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-c820306a69867f3dc7b5b5a3560983b40aad47d6409ef2e3d5656586f9c9ddea-merged.mount: Deactivated successfully.
Nov 26 12:39:03 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Nov 26 12:39:03 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Nov 26 12:39:03 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Nov 26 12:39:03 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 26 12:39:03 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2399447549' entity='client.rgw.rgw.compute-0.cpfqrx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 26 12:39:03 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 36 pg[10.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [2] r=0 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:03 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Nov 26 12:39:03 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Nov 26 12:39:03 compute-0 ceph-mon[74966]: 3.19 scrub starts
Nov 26 12:39:03 compute-0 ceph-mon[74966]: 3.19 scrub ok
Nov 26 12:39:03 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2324126989' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 26 12:39:03 compute-0 ceph-mon[74966]: osdmap e36: 3 total, 3 up, 3 in
Nov 26 12:39:03 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2399447549' entity='client.rgw.rgw.compute-0.cpfqrx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 26 12:39:03 compute-0 distracted_kalam[100273]: {
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:     "0": [
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:         {
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "devices": [
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "/dev/loop3"
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             ],
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "lv_name": "ceph_lv0",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "lv_size": "21470642176",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "name": "ceph_lv0",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "tags": {
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.cluster_name": "ceph",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.crush_device_class": "",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.encrypted": "0",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.osd_id": "0",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.type": "block",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.vdo": "0"
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             },
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "type": "block",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "vg_name": "ceph_vg0"
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:         }
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:     ],
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:     "1": [
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:         {
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "devices": [
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "/dev/loop4"
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             ],
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "lv_name": "ceph_lv1",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "lv_size": "21470642176",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "name": "ceph_lv1",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "tags": {
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.cluster_name": "ceph",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.crush_device_class": "",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.encrypted": "0",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.osd_id": "1",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.type": "block",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.vdo": "0"
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             },
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "type": "block",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "vg_name": "ceph_vg1"
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:         }
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:     ],
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:     "2": [
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:         {
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "devices": [
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "/dev/loop5"
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             ],
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "lv_name": "ceph_lv2",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "lv_size": "21470642176",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "name": "ceph_lv2",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "tags": {
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.cluster_name": "ceph",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.crush_device_class": "",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.encrypted": "0",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.osd_id": "2",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.type": "block",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:                 "ceph.vdo": "0"
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             },
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "type": "block",
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:             "vg_name": "ceph_vg2"
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:         }
Nov 26 12:39:03 compute-0 distracted_kalam[100273]:     ]
Nov 26 12:39:03 compute-0 distracted_kalam[100273]: }
Nov 26 12:39:03 compute-0 systemd[1]: libpod-df151ad18e7967e9e0032dd4afe9cf67e3bd143ab39904db5f89f5b3a45eb6b4.scope: Deactivated successfully.
Nov 26 12:39:03 compute-0 podman[100293]: 2025-11-26 12:39:03.778003779 +0000 UTC m=+0.017976734 container died df151ad18e7967e9e0032dd4afe9cf67e3bd143ab39904db5f89f5b3a45eb6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 12:39:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ea2c205db8c9fb184e1f3fd0749663b7ff4a66256ba4bad365d709a841505bc-merged.mount: Deactivated successfully.
Nov 26 12:39:03 compute-0 podman[100293]: 2025-11-26 12:39:03.806558116 +0000 UTC m=+0.046531071 container remove df151ad18e7967e9e0032dd4afe9cf67e3bd143ab39904db5f89f5b3a45eb6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 12:39:03 compute-0 systemd[1]: libpod-conmon-df151ad18e7967e9e0032dd4afe9cf67e3bd143ab39904db5f89f5b3a45eb6b4.scope: Deactivated successfully.
Nov 26 12:39:03 compute-0 sudo[100135]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:03 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v71: 134 pgs: 1 creating+peering, 1 active+clean+scrubbing, 2 unknown, 130 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 10 op/s
Nov 26 12:39:03 compute-0 sudo[100305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:39:03 compute-0 sudo[100305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:03 compute-0 sudo[100305]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:03 compute-0 sudo[100330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:39:03 compute-0 sudo[100330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:03 compute-0 sudo[100330]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:03 compute-0 sudo[100355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:39:03 compute-0 sudo[100355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:03 compute-0 sudo[100355]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:03 compute-0 sudo[100380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- raw list --format json
Nov 26 12:39:04 compute-0 sudo[100380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:04 compute-0 sudo[100428]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irqfcjnrvjwgjgebzecnpzppxlupmqbt ; /usr/bin/python3'
Nov 26 12:39:04 compute-0 sudo[100428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:39:04 compute-0 python3[100430]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:39:04 compute-0 podman[100453]: 2025-11-26 12:39:04.20829393 +0000 UTC m=+0.032955862 container create e9abb9650e3785d57c57aece37c5c20a8fbf2fac70ea094c608efa5732ddc545 (image=quay.io/ceph/ceph:v18, name=practical_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 26 12:39:04 compute-0 systemd[1]: Started libpod-conmon-e9abb9650e3785d57c57aece37c5c20a8fbf2fac70ea094c608efa5732ddc545.scope.
Nov 26 12:39:04 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:39:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df6f5d1da2f4dcd6ae084a199cc6176351390cadd2734807b5dac107bc7cd43f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df6f5d1da2f4dcd6ae084a199cc6176351390cadd2734807b5dac107bc7cd43f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:04 compute-0 podman[100453]: 2025-11-26 12:39:04.257971461 +0000 UTC m=+0.082633414 container init e9abb9650e3785d57c57aece37c5c20a8fbf2fac70ea094c608efa5732ddc545 (image=quay.io/ceph/ceph:v18, name=practical_mccarthy, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:39:04 compute-0 podman[100453]: 2025-11-26 12:39:04.262825757 +0000 UTC m=+0.087487690 container start e9abb9650e3785d57c57aece37c5c20a8fbf2fac70ea094c608efa5732ddc545 (image=quay.io/ceph/ceph:v18, name=practical_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:39:04 compute-0 podman[100453]: 2025-11-26 12:39:04.26402635 +0000 UTC m=+0.088688303 container attach e9abb9650e3785d57c57aece37c5c20a8fbf2fac70ea094c608efa5732ddc545 (image=quay.io/ceph/ceph:v18, name=practical_mccarthy, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 12:39:04 compute-0 podman[100475]: 2025-11-26 12:39:04.266693795 +0000 UTC m=+0.029724653 container create 6563b87f683c8f8f623477884c630749b73e48a4bf32e557772efc216c7d9f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 12:39:04 compute-0 podman[100453]: 2025-11-26 12:39:04.192198069 +0000 UTC m=+0.016860023 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:39:04 compute-0 systemd[1]: Started libpod-conmon-6563b87f683c8f8f623477884c630749b73e48a4bf32e557772efc216c7d9f74.scope.
Nov 26 12:39:04 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:39:04 compute-0 podman[100475]: 2025-11-26 12:39:04.328729904 +0000 UTC m=+0.091760782 container init 6563b87f683c8f8f623477884c630749b73e48a4bf32e557772efc216c7d9f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leavitt, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:39:04 compute-0 podman[100475]: 2025-11-26 12:39:04.333621471 +0000 UTC m=+0.096652329 container start 6563b87f683c8f8f623477884c630749b73e48a4bf32e557772efc216c7d9f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leavitt, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:39:04 compute-0 podman[100475]: 2025-11-26 12:39:04.334858953 +0000 UTC m=+0.097889830 container attach 6563b87f683c8f8f623477884c630749b73e48a4bf32e557772efc216c7d9f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 12:39:04 compute-0 determined_leavitt[100493]: 167 167
Nov 26 12:39:04 compute-0 systemd[1]: libpod-6563b87f683c8f8f623477884c630749b73e48a4bf32e557772efc216c7d9f74.scope: Deactivated successfully.
Nov 26 12:39:04 compute-0 conmon[100493]: conmon 6563b87f683c8f8f6234 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6563b87f683c8f8f623477884c630749b73e48a4bf32e557772efc216c7d9f74.scope/container/memory.events
Nov 26 12:39:04 compute-0 podman[100475]: 2025-11-26 12:39:04.337950318 +0000 UTC m=+0.100981175 container died 6563b87f683c8f8f623477884c630749b73e48a4bf32e557772efc216c7d9f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leavitt, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 12:39:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1ecbbbb36c737d6d2bfc6ff2cb6567a8a6f18e15a055643d9e024ddb62072f1-merged.mount: Deactivated successfully.
Nov 26 12:39:04 compute-0 podman[100475]: 2025-11-26 12:39:04.254335248 +0000 UTC m=+0.017366125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:39:04 compute-0 podman[100475]: 2025-11-26 12:39:04.357652495 +0000 UTC m=+0.120683351 container remove 6563b87f683c8f8f623477884c630749b73e48a4bf32e557772efc216c7d9f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leavitt, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 12:39:04 compute-0 systemd[1]: libpod-conmon-6563b87f683c8f8f623477884c630749b73e48a4bf32e557772efc216c7d9f74.scope: Deactivated successfully.
Nov 26 12:39:04 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Nov 26 12:39:04 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2399447549' entity='client.rgw.rgw.compute-0.cpfqrx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 26 12:39:04 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Nov 26 12:39:04 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Nov 26 12:39:04 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 37 pg[10.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [2] r=0 lpr=36 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:04 compute-0 podman[100515]: 2025-11-26 12:39:04.499744977 +0000 UTC m=+0.031411842 container create 6a8f746e08d0660c6b3afce3f7b369b534a4932d8c049c7bff051ffb390dbc5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_allen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:39:04 compute-0 systemd[1]: Started libpod-conmon-6a8f746e08d0660c6b3afce3f7b369b534a4932d8c049c7bff051ffb390dbc5e.scope.
Nov 26 12:39:04 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:39:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e0f2721c0500418cde0276b1dda1e8b1a977a94309983a40be0e97c204d489/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e0f2721c0500418cde0276b1dda1e8b1a977a94309983a40be0e97c204d489/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e0f2721c0500418cde0276b1dda1e8b1a977a94309983a40be0e97c204d489/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e0f2721c0500418cde0276b1dda1e8b1a977a94309983a40be0e97c204d489/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:04 compute-0 podman[100515]: 2025-11-26 12:39:04.559258722 +0000 UTC m=+0.090925607 container init 6a8f746e08d0660c6b3afce3f7b369b534a4932d8c049c7bff051ffb390dbc5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 12:39:04 compute-0 podman[100515]: 2025-11-26 12:39:04.567028453 +0000 UTC m=+0.098695318 container start 6a8f746e08d0660c6b3afce3f7b369b534a4932d8c049c7bff051ffb390dbc5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 12:39:04 compute-0 podman[100515]: 2025-11-26 12:39:04.568149996 +0000 UTC m=+0.099816862 container attach 6a8f746e08d0660c6b3afce3f7b369b534a4932d8c049c7bff051ffb390dbc5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 12:39:04 compute-0 podman[100515]: 2025-11-26 12:39:04.489092283 +0000 UTC m=+0.020759169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:39:04 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Nov 26 12:39:04 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Nov 26 12:39:04 compute-0 ceph-mon[74966]: 4.1d scrub starts
Nov 26 12:39:04 compute-0 ceph-mon[74966]: 4.1d scrub ok
Nov 26 12:39:04 compute-0 ceph-mon[74966]: pgmap v71: 134 pgs: 1 creating+peering, 1 active+clean+scrubbing, 2 unknown, 130 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 10 op/s
Nov 26 12:39:04 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2399447549' entity='client.rgw.rgw.compute-0.cpfqrx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 26 12:39:04 compute-0 ceph-mon[74966]: osdmap e37: 3 total, 3 up, 3 in
Nov 26 12:39:04 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 26 12:39:04 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/219168040' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 12:39:04 compute-0 practical_mccarthy[100477]: 
Nov 26 12:39:04 compute-0 practical_mccarthy[100477]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_al
low_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.cpfqrx","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Nov 26 12:39:04 compute-0 systemd[1]: libpod-e9abb9650e3785d57c57aece37c5c20a8fbf2fac70ea094c608efa5732ddc545.scope: Deactivated successfully.
Nov 26 12:39:04 compute-0 podman[100568]: 2025-11-26 12:39:04.766679036 +0000 UTC m=+0.016978574 container died e9abb9650e3785d57c57aece37c5c20a8fbf2fac70ea094c608efa5732ddc545 (image=quay.io/ceph/ceph:v18, name=practical_mccarthy, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:39:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-df6f5d1da2f4dcd6ae084a199cc6176351390cadd2734807b5dac107bc7cd43f-merged.mount: Deactivated successfully.
Nov 26 12:39:04 compute-0 podman[100568]: 2025-11-26 12:39:04.787579339 +0000 UTC m=+0.037878868 container remove e9abb9650e3785d57c57aece37c5c20a8fbf2fac70ea094c608efa5732ddc545 (image=quay.io/ceph/ceph:v18, name=practical_mccarthy, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 12:39:04 compute-0 systemd[1]: libpod-conmon-e9abb9650e3785d57c57aece37c5c20a8fbf2fac70ea094c608efa5732ddc545.scope: Deactivated successfully.
Nov 26 12:39:04 compute-0 sudo[100428]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:05 compute-0 angry_allen[100542]: {
Nov 26 12:39:05 compute-0 angry_allen[100542]:     "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 12:39:05 compute-0 angry_allen[100542]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:39:05 compute-0 angry_allen[100542]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 12:39:05 compute-0 angry_allen[100542]:         "osd_id": 1,
Nov 26 12:39:05 compute-0 angry_allen[100542]:         "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:39:05 compute-0 angry_allen[100542]:         "type": "bluestore"
Nov 26 12:39:05 compute-0 angry_allen[100542]:     },
Nov 26 12:39:05 compute-0 angry_allen[100542]:     "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 12:39:05 compute-0 angry_allen[100542]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:39:05 compute-0 angry_allen[100542]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 12:39:05 compute-0 angry_allen[100542]:         "osd_id": 2,
Nov 26 12:39:05 compute-0 angry_allen[100542]:         "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:39:05 compute-0 angry_allen[100542]:         "type": "bluestore"
Nov 26 12:39:05 compute-0 angry_allen[100542]:     },
Nov 26 12:39:05 compute-0 angry_allen[100542]:     "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 12:39:05 compute-0 angry_allen[100542]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:39:05 compute-0 angry_allen[100542]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 12:39:05 compute-0 angry_allen[100542]:         "osd_id": 0,
Nov 26 12:39:05 compute-0 angry_allen[100542]:         "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:39:05 compute-0 angry_allen[100542]:         "type": "bluestore"
Nov 26 12:39:05 compute-0 angry_allen[100542]:     }
Nov 26 12:39:05 compute-0 angry_allen[100542]: }
Nov 26 12:39:05 compute-0 systemd[1]: libpod-6a8f746e08d0660c6b3afce3f7b369b534a4932d8c049c7bff051ffb390dbc5e.scope: Deactivated successfully.
Nov 26 12:39:05 compute-0 podman[100607]: 2025-11-26 12:39:05.379296796 +0000 UTC m=+0.017301884 container died 6a8f746e08d0660c6b3afce3f7b369b534a4932d8c049c7bff051ffb390dbc5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_allen, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:39:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7e0f2721c0500418cde0276b1dda1e8b1a977a94309983a40be0e97c204d489-merged.mount: Deactivated successfully.
Nov 26 12:39:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Nov 26 12:39:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Nov 26 12:39:05 compute-0 podman[100607]: 2025-11-26 12:39:05.407277361 +0000 UTC m=+0.045282459 container remove 6a8f746e08d0660c6b3afce3f7b369b534a4932d8c049c7bff051ffb390dbc5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_allen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:39:05 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Nov 26 12:39:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 26 12:39:05 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2584753622' entity='client.rgw.rgw.compute-0.cpfqrx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 26 12:39:05 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 38 pg[11.0( empty local-lis/les=0/0 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:05 compute-0 systemd[1]: libpod-conmon-6a8f746e08d0660c6b3afce3f7b369b534a4932d8c049c7bff051ffb390dbc5e.scope: Deactivated successfully.
Nov 26 12:39:05 compute-0 sudo[100380]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:39:05 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:39:05 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:05 compute-0 sudo[100642]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptpasuzotijgkrqehcmxcnrkvetnmkjy ; /usr/bin/python3'
Nov 26 12:39:05 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 753f5091-b0da-4503-82c4-6a97d089162b does not exist
Nov 26 12:39:05 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev dd8122ff-bad0-48a8-991c-71dbe10fcaed does not exist
Nov 26 12:39:05 compute-0 sudo[100642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:39:05 compute-0 sudo[100645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:39:05 compute-0 sudo[100645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:05 compute-0 sudo[100645]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:05 compute-0 sudo[100670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 12:39:05 compute-0 sudo[100670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:05 compute-0 sudo[100670]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:05 compute-0 python3[100644]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:39:05 compute-0 sudo[100695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:39:05 compute-0 sudo[100695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:05 compute-0 sudo[100695]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:05 compute-0 podman[100719]: 2025-11-26 12:39:05.618683535 +0000 UTC m=+0.030645208 container create 7baf86dadac48501a29a3bf52a9e35c49c86f840404f676ff5847d2a22181b60 (image=quay.io/ceph/ceph:v18, name=sleepy_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:39:05 compute-0 sudo[100721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:39:05 compute-0 sudo[100721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:05 compute-0 sudo[100721]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:05 compute-0 systemd[1]: Started libpod-conmon-7baf86dadac48501a29a3bf52a9e35c49c86f840404f676ff5847d2a22181b60.scope.
Nov 26 12:39:05 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:39:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb1fa87f97aff1235f49a60c232b184beef08c8db3c15501390859f11a0986ff/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb1fa87f97aff1235f49a60c232b184beef08c8db3c15501390859f11a0986ff/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:05 compute-0 podman[100719]: 2025-11-26 12:39:05.663194269 +0000 UTC m=+0.075155952 container init 7baf86dadac48501a29a3bf52a9e35c49c86f840404f676ff5847d2a22181b60 (image=quay.io/ceph/ceph:v18, name=sleepy_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:39:05 compute-0 podman[100719]: 2025-11-26 12:39:05.667658342 +0000 UTC m=+0.079620024 container start 7baf86dadac48501a29a3bf52a9e35c49c86f840404f676ff5847d2a22181b60 (image=quay.io/ceph/ceph:v18, name=sleepy_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:39:05 compute-0 podman[100719]: 2025-11-26 12:39:05.668863463 +0000 UTC m=+0.080825145 container attach 7baf86dadac48501a29a3bf52a9e35c49c86f840404f676ff5847d2a22181b60 (image=quay.io/ceph/ceph:v18, name=sleepy_clarke, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Nov 26 12:39:05 compute-0 sudo[100757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:39:05 compute-0 sudo[100757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:05 compute-0 sudo[100757]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:05 compute-0 ceph-mds[99300]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Nov 26 12:39:05 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mds-cephfs-compute-0-ipyiim[99296]: 2025-11-26T12:39:05.686+0000 7f641120d640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Nov 26 12:39:05 compute-0 ceph-mon[74966]: 5.6 scrub starts
Nov 26 12:39:05 compute-0 ceph-mon[74966]: 5.6 scrub ok
Nov 26 12:39:05 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/219168040' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 12:39:05 compute-0 ceph-mon[74966]: osdmap e38: 3 total, 3 up, 3 in
Nov 26 12:39:05 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2584753622' entity='client.rgw.rgw.compute-0.cpfqrx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 26 12:39:05 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:05 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:05 compute-0 podman[100719]: 2025-11-26 12:39:05.605718565 +0000 UTC m=+0.017680247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:39:05 compute-0 sudo[100786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 26 12:39:05 compute-0 sudo[100786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:05 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v74: 135 pgs: 1 unknown, 1 creating+peering, 1 active+clean+scrubbing, 132 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s
Nov 26 12:39:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:39:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:39:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:39:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:39:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:39:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:39:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:39:06 compute-0 podman[100886]: 2025-11-26 12:39:06.065680147 +0000 UTC m=+0.036600469 container exec ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:39:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Nov 26 12:39:06 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/607187607' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 26 12:39:06 compute-0 sleepy_clarke[100758]: mimic
Nov 26 12:39:06 compute-0 systemd[1]: libpod-7baf86dadac48501a29a3bf52a9e35c49c86f840404f676ff5847d2a22181b60.scope: Deactivated successfully.
Nov 26 12:39:06 compute-0 podman[100719]: 2025-11-26 12:39:06.1222049 +0000 UTC m=+0.534166571 container died 7baf86dadac48501a29a3bf52a9e35c49c86f840404f676ff5847d2a22181b60 (image=quay.io/ceph/ceph:v18, name=sleepy_clarke, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:39:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb1fa87f97aff1235f49a60c232b184beef08c8db3c15501390859f11a0986ff-merged.mount: Deactivated successfully.
Nov 26 12:39:06 compute-0 podman[100719]: 2025-11-26 12:39:06.147635971 +0000 UTC m=+0.559597643 container remove 7baf86dadac48501a29a3bf52a9e35c49c86f840404f676ff5847d2a22181b60 (image=quay.io/ceph/ceph:v18, name=sleepy_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:39:06 compute-0 systemd[1]: libpod-conmon-7baf86dadac48501a29a3bf52a9e35c49c86f840404f676ff5847d2a22181b60.scope: Deactivated successfully.
Nov 26 12:39:06 compute-0 sudo[100642]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:06 compute-0 podman[100914]: 2025-11-26 12:39:06.200864612 +0000 UTC m=+0.046918272 container exec_died ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:39:06 compute-0 podman[100886]: 2025-11-26 12:39:06.202846868 +0000 UTC m=+0.173767180 container exec_died ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 12:39:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Nov 26 12:39:06 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2584753622' entity='client.rgw.rgw.compute-0.cpfqrx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 26 12:39:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Nov 26 12:39:06 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Nov 26 12:39:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 26 12:39:06 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2584753622' entity='client.rgw.rgw.compute-0.cpfqrx' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 26 12:39:06 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 39 pg[11.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:06 compute-0 sudo[100786]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:39:06 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:39:06 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:39:06 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:39:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 12:39:06 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:39:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 12:39:06 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:06 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 9ba173af-d5e2-4452-a4cd-b9fe7207edbb does not exist
Nov 26 12:39:06 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 34cf9d9d-7229-4e52-84e5-34ba2ff0c250 does not exist
Nov 26 12:39:06 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev e1d9600a-fce1-421e-a23d-97c06b55f878 does not exist
Nov 26 12:39:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 12:39:06 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:39:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 12:39:06 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:39:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:39:06 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:39:06 compute-0 sudo[101030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:39:06 compute-0 sudo[101030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:06 compute-0 sudo[101030]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:06 compute-0 ceph-mon[74966]: pgmap v74: 135 pgs: 1 unknown, 1 creating+peering, 1 active+clean+scrubbing, 132 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s
Nov 26 12:39:06 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/607187607' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 26 12:39:06 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2584753622' entity='client.rgw.rgw.compute-0.cpfqrx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 26 12:39:06 compute-0 ceph-mon[74966]: osdmap e39: 3 total, 3 up, 3 in
Nov 26 12:39:06 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2584753622' entity='client.rgw.rgw.compute-0.cpfqrx' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 26 12:39:06 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:06 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:06 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:39:06 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:39:06 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:06 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:39:06 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:39:06 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:39:06 compute-0 sudo[101055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:39:06 compute-0 sudo[101055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:06 compute-0 sudo[101055]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:06 compute-0 sudo[101080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:39:06 compute-0 sudo[101080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:06 compute-0 sudo[101080]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:06 compute-0 sudo[101105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 12:39:06 compute-0 sudo[101105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:06 compute-0 sudo[101153]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqmjwibrkdkdyodhrwtpuclxtlsnxydt ; /usr/bin/python3'
Nov 26 12:39:06 compute-0 sudo[101153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:39:06 compute-0 python3[101155]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:39:06 compute-0 podman[101175]: 2025-11-26 12:39:06.992569797 +0000 UTC m=+0.027108453 container create a6b7ba86976899ab958595050282bb48b83e02ddc72f276273ef1f1ddef59792 (image=quay.io/ceph/ceph:v18, name=festive_mclean, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:39:07 compute-0 systemd[1]: Started libpod-conmon-a6b7ba86976899ab958595050282bb48b83e02ddc72f276273ef1f1ddef59792.scope.
Nov 26 12:39:07 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998639a2a36c9e94feabfd164c87c5c88310bfa059ac3c04ff794d775adfcbce/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998639a2a36c9e94feabfd164c87c5c88310bfa059ac3c04ff794d775adfcbce/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:07 compute-0 podman[101175]: 2025-11-26 12:39:07.038876686 +0000 UTC m=+0.073415342 container init a6b7ba86976899ab958595050282bb48b83e02ddc72f276273ef1f1ddef59792 (image=quay.io/ceph/ceph:v18, name=festive_mclean, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:39:07 compute-0 podman[101175]: 2025-11-26 12:39:07.046809403 +0000 UTC m=+0.081348049 container start a6b7ba86976899ab958595050282bb48b83e02ddc72f276273ef1f1ddef59792 (image=quay.io/ceph/ceph:v18, name=festive_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 12:39:07 compute-0 podman[101175]: 2025-11-26 12:39:07.049814937 +0000 UTC m=+0.084353593 container attach a6b7ba86976899ab958595050282bb48b83e02ddc72f276273ef1f1ddef59792 (image=quay.io/ceph/ceph:v18, name=festive_mclean, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:39:07 compute-0 podman[101175]: 2025-11-26 12:39:06.981648016 +0000 UTC m=+0.016186672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:39:07 compute-0 podman[101203]: 2025-11-26 12:39:07.100677991 +0000 UTC m=+0.032798675 container create 4d0bb7546970847f8765b48db3bfefe28cbe2f1a9f2da6639ea28fecbe500570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:39:07 compute-0 systemd[1]: Started libpod-conmon-4d0bb7546970847f8765b48db3bfefe28cbe2f1a9f2da6639ea28fecbe500570.scope.
Nov 26 12:39:07 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:39:07 compute-0 podman[101203]: 2025-11-26 12:39:07.142457108 +0000 UTC m=+0.074577781 container init 4d0bb7546970847f8765b48db3bfefe28cbe2f1a9f2da6639ea28fecbe500570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kalam, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:39:07 compute-0 podman[101203]: 2025-11-26 12:39:07.146362356 +0000 UTC m=+0.078483040 container start 4d0bb7546970847f8765b48db3bfefe28cbe2f1a9f2da6639ea28fecbe500570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kalam, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:39:07 compute-0 podman[101203]: 2025-11-26 12:39:07.147698976 +0000 UTC m=+0.079819680 container attach 4d0bb7546970847f8765b48db3bfefe28cbe2f1a9f2da6639ea28fecbe500570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kalam, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:39:07 compute-0 dreamy_kalam[101216]: 167 167
Nov 26 12:39:07 compute-0 systemd[1]: libpod-4d0bb7546970847f8765b48db3bfefe28cbe2f1a9f2da6639ea28fecbe500570.scope: Deactivated successfully.
Nov 26 12:39:07 compute-0 podman[101203]: 2025-11-26 12:39:07.149901435 +0000 UTC m=+0.082022119 container died 4d0bb7546970847f8765b48db3bfefe28cbe2f1a9f2da6639ea28fecbe500570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kalam, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:39:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9369c15448f6c10862603c5690114f9bac20fe13b401160f08d987945de5eb0-merged.mount: Deactivated successfully.
Nov 26 12:39:07 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Nov 26 12:39:07 compute-0 podman[101203]: 2025-11-26 12:39:07.172503135 +0000 UTC m=+0.104623819 container remove 4d0bb7546970847f8765b48db3bfefe28cbe2f1a9f2da6639ea28fecbe500570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:39:07 compute-0 podman[101203]: 2025-11-26 12:39:07.089703782 +0000 UTC m=+0.021824486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:39:07 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Nov 26 12:39:07 compute-0 systemd[1]: libpod-conmon-4d0bb7546970847f8765b48db3bfefe28cbe2f1a9f2da6639ea28fecbe500570.scope: Deactivated successfully.
Nov 26 12:39:07 compute-0 podman[101238]: 2025-11-26 12:39:07.286930958 +0000 UTC m=+0.028099110 container create f069e06cfdf27573683306b1d537258cdf0e2f506247ab98e3bb26420434964c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_allen, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:39:07 compute-0 systemd[1]: Started libpod-conmon-f069e06cfdf27573683306b1d537258cdf0e2f506247ab98e3bb26420434964c.scope.
Nov 26 12:39:07 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a8a9587addb35c86d654d0d5c32b1ee6c591973e458aafe2d172f41e2e8892/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a8a9587addb35c86d654d0d5c32b1ee6c591973e458aafe2d172f41e2e8892/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a8a9587addb35c86d654d0d5c32b1ee6c591973e458aafe2d172f41e2e8892/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a8a9587addb35c86d654d0d5c32b1ee6c591973e458aafe2d172f41e2e8892/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a8a9587addb35c86d654d0d5c32b1ee6c591973e458aafe2d172f41e2e8892/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:07 compute-0 podman[101238]: 2025-11-26 12:39:07.348672781 +0000 UTC m=+0.089840943 container init f069e06cfdf27573683306b1d537258cdf0e2f506247ab98e3bb26420434964c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:39:07 compute-0 podman[101238]: 2025-11-26 12:39:07.353635813 +0000 UTC m=+0.094803966 container start f069e06cfdf27573683306b1d537258cdf0e2f506247ab98e3bb26420434964c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_allen, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 12:39:07 compute-0 podman[101238]: 2025-11-26 12:39:07.35559241 +0000 UTC m=+0.096760582 container attach f069e06cfdf27573683306b1d537258cdf0e2f506247ab98e3bb26420434964c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_allen, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 12:39:07 compute-0 podman[101238]: 2025-11-26 12:39:07.274900889 +0000 UTC m=+0.016069061 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:39:07 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Nov 26 12:39:07 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2584753622' entity='client.rgw.rgw.compute-0.cpfqrx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 26 12:39:07 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Nov 26 12:39:07 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Nov 26 12:39:07 compute-0 radosgw[98869]: LDAP not started since no server URIs were provided in the configuration.
Nov 26 12:39:07 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-rgw-rgw-compute-0-cpfqrx[98865]: 2025-11-26T12:39:07.459+0000 7fdc13f5c940 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 26 12:39:07 compute-0 radosgw[98869]: framework: beast
Nov 26 12:39:07 compute-0 radosgw[98869]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 26 12:39:07 compute-0 radosgw[98869]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 26 12:39:07 compute-0 radosgw[98869]: starting handler: beast
Nov 26 12:39:07 compute-0 radosgw[98869]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 12:39:07 compute-0 radosgw[98869]: mgrc service_daemon_register rgw.14273 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC 7763 64-Core Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.cpfqrx,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7865364,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=bf23e527-2843-459f-8ad0-cbeb0777daef,zone_name=default,zonegroup_id=20a26587-2166-4189-a102-225650a14516,zonegroup_name=default}
Nov 26 12:39:07 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Nov 26 12:39:07 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/807273787' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 26 12:39:07 compute-0 festive_mclean[101199]: 
Nov 26 12:39:07 compute-0 festive_mclean[101199]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":6}}
Nov 26 12:39:07 compute-0 systemd[1]: libpod-a6b7ba86976899ab958595050282bb48b83e02ddc72f276273ef1f1ddef59792.scope: Deactivated successfully.
Nov 26 12:39:07 compute-0 podman[101175]: 2025-11-26 12:39:07.558093164 +0000 UTC m=+0.592631810 container died a6b7ba86976899ab958595050282bb48b83e02ddc72f276273ef1f1ddef59792 (image=quay.io/ceph/ceph:v18, name=festive_mclean, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:39:07 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Nov 26 12:39:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-998639a2a36c9e94feabfd164c87c5c88310bfa059ac3c04ff794d775adfcbce-merged.mount: Deactivated successfully.
Nov 26 12:39:07 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Nov 26 12:39:07 compute-0 podman[101175]: 2025-11-26 12:39:07.580713699 +0000 UTC m=+0.615252345 container remove a6b7ba86976899ab958595050282bb48b83e02ddc72f276273ef1f1ddef59792 (image=quay.io/ceph/ceph:v18, name=festive_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:39:07 compute-0 sudo[101153]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:07 compute-0 systemd[1]: libpod-conmon-a6b7ba86976899ab958595050282bb48b83e02ddc72f276273ef1f1ddef59792.scope: Deactivated successfully.
Nov 26 12:39:07 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Nov 26 12:39:07 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Nov 26 12:39:07 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v77: 135 pgs: 1 unknown, 1 creating+peering, 133 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s
Nov 26 12:39:08 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Nov 26 12:39:08 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Nov 26 12:39:08 compute-0 goofy_allen[101270]: --> passed data devices: 0 physical, 3 LVM
Nov 26 12:39:08 compute-0 goofy_allen[101270]: --> relative data size: 1.0
Nov 26 12:39:08 compute-0 goofy_allen[101270]: --> All data devices are unavailable
Nov 26 12:39:08 compute-0 systemd[1]: libpod-f069e06cfdf27573683306b1d537258cdf0e2f506247ab98e3bb26420434964c.scope: Deactivated successfully.
Nov 26 12:39:08 compute-0 podman[101852]: 2025-11-26 12:39:08.260374642 +0000 UTC m=+0.016082827 container died f069e06cfdf27573683306b1d537258cdf0e2f506247ab98e3bb26420434964c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 12:39:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-72a8a9587addb35c86d654d0d5c32b1ee6c591973e458aafe2d172f41e2e8892-merged.mount: Deactivated successfully.
Nov 26 12:39:08 compute-0 podman[101852]: 2025-11-26 12:39:08.289644537 +0000 UTC m=+0.045352712 container remove f069e06cfdf27573683306b1d537258cdf0e2f506247ab98e3bb26420434964c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_allen, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 12:39:08 compute-0 systemd[1]: libpod-conmon-f069e06cfdf27573683306b1d537258cdf0e2f506247ab98e3bb26420434964c.scope: Deactivated successfully.
Nov 26 12:39:08 compute-0 sudo[101105]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:08 compute-0 sudo[101864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:39:08 compute-0 sudo[101864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:08 compute-0 sudo[101864]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:08 compute-0 sudo[101889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:39:08 compute-0 sudo[101889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:08 compute-0 sudo[101889]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:08 compute-0 ceph-mon[74966]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 26 12:39:08 compute-0 ceph-mon[74966]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 26 12:39:08 compute-0 ceph-mon[74966]: 3.1a scrub starts
Nov 26 12:39:08 compute-0 ceph-mon[74966]: 3.1a scrub ok
Nov 26 12:39:08 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2584753622' entity='client.rgw.rgw.compute-0.cpfqrx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 26 12:39:08 compute-0 ceph-mon[74966]: osdmap e40: 3 total, 3 up, 3 in
Nov 26 12:39:08 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/807273787' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 26 12:39:08 compute-0 ceph-mon[74966]: 5.8 scrub starts
Nov 26 12:39:08 compute-0 ceph-mon[74966]: 5.8 scrub ok
Nov 26 12:39:08 compute-0 ceph-mon[74966]: 4.1e scrub starts
Nov 26 12:39:08 compute-0 ceph-mon[74966]: 4.1e scrub ok
Nov 26 12:39:08 compute-0 ceph-mon[74966]: pgmap v77: 135 pgs: 1 unknown, 1 creating+peering, 133 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s
Nov 26 12:39:08 compute-0 sudo[101914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:39:08 compute-0 sudo[101914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:08 compute-0 sudo[101914]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:08 compute-0 sudo[101939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- lvm list --format json
Nov 26 12:39:08 compute-0 sudo[101939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:08 compute-0 podman[101994]: 2025-11-26 12:39:08.723169983 +0000 UTC m=+0.032647521 container create 66cf840fac287b75a1d03f3316551b3ad9946a3c10b3e8789e2a07c0b7e1ecec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:39:08 compute-0 systemd[1]: Started libpod-conmon-66cf840fac287b75a1d03f3316551b3ad9946a3c10b3e8789e2a07c0b7e1ecec.scope.
Nov 26 12:39:08 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:39:08 compute-0 podman[101994]: 2025-11-26 12:39:08.769882015 +0000 UTC m=+0.079359573 container init 66cf840fac287b75a1d03f3316551b3ad9946a3c10b3e8789e2a07c0b7e1ecec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:39:08 compute-0 podman[101994]: 2025-11-26 12:39:08.774460993 +0000 UTC m=+0.083938531 container start 66cf840fac287b75a1d03f3316551b3ad9946a3c10b3e8789e2a07c0b7e1ecec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_newton, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 12:39:08 compute-0 podman[101994]: 2025-11-26 12:39:08.775683598 +0000 UTC m=+0.085161135 container attach 66cf840fac287b75a1d03f3316551b3ad9946a3c10b3e8789e2a07c0b7e1ecec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 12:39:08 compute-0 admiring_newton[102007]: 167 167
Nov 26 12:39:08 compute-0 systemd[1]: libpod-66cf840fac287b75a1d03f3316551b3ad9946a3c10b3e8789e2a07c0b7e1ecec.scope: Deactivated successfully.
Nov 26 12:39:08 compute-0 podman[101994]: 2025-11-26 12:39:08.777746285 +0000 UTC m=+0.087223822 container died 66cf840fac287b75a1d03f3316551b3ad9946a3c10b3e8789e2a07c0b7e1ecec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 12:39:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8147b9bdf171642259c9be3b06a26a9bb66f14e72a7b8396766826270b7fb5f-merged.mount: Deactivated successfully.
Nov 26 12:39:08 compute-0 podman[101994]: 2025-11-26 12:39:08.798594099 +0000 UTC m=+0.108071637 container remove 66cf840fac287b75a1d03f3316551b3ad9946a3c10b3e8789e2a07c0b7e1ecec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_newton, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 12:39:08 compute-0 podman[101994]: 2025-11-26 12:39:08.710541988 +0000 UTC m=+0.020019555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:39:08 compute-0 systemd[1]: libpod-conmon-66cf840fac287b75a1d03f3316551b3ad9946a3c10b3e8789e2a07c0b7e1ecec.scope: Deactivated successfully.
Nov 26 12:39:08 compute-0 podman[102031]: 2025-11-26 12:39:08.914636063 +0000 UTC m=+0.028646512 container create 04dd892334d211c5bac12bde5c99fb0872b4880b33587e72a8aa86bc5c5d8509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sanderson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:39:08 compute-0 systemd[1]: Started libpod-conmon-04dd892334d211c5bac12bde5c99fb0872b4880b33587e72a8aa86bc5c5d8509.scope.
Nov 26 12:39:08 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21fdc469917721c78bdf26a2d9e14d65b1153748658ce1f96d8c14765196c09e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21fdc469917721c78bdf26a2d9e14d65b1153748658ce1f96d8c14765196c09e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21fdc469917721c78bdf26a2d9e14d65b1153748658ce1f96d8c14765196c09e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21fdc469917721c78bdf26a2d9e14d65b1153748658ce1f96d8c14765196c09e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:08 compute-0 podman[102031]: 2025-11-26 12:39:08.967786829 +0000 UTC m=+0.081797307 container init 04dd892334d211c5bac12bde5c99fb0872b4880b33587e72a8aa86bc5c5d8509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sanderson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:39:08 compute-0 podman[102031]: 2025-11-26 12:39:08.9733701 +0000 UTC m=+0.087380548 container start 04dd892334d211c5bac12bde5c99fb0872b4880b33587e72a8aa86bc5c5d8509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 12:39:08 compute-0 podman[102031]: 2025-11-26 12:39:08.974486503 +0000 UTC m=+0.088496952 container attach 04dd892334d211c5bac12bde5c99fb0872b4880b33587e72a8aa86bc5c5d8509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sanderson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 12:39:08 compute-0 podman[102031]: 2025-11-26 12:39:08.903180257 +0000 UTC m=+0.017190725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:39:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 26 12:39:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 26 12:39:09 compute-0 ceph-mon[74966]: 3.1c scrub starts
Nov 26 12:39:09 compute-0 ceph-mon[74966]: 3.1c scrub ok
Nov 26 12:39:09 compute-0 ceph-mon[74966]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 26 12:39:09 compute-0 ceph-mon[74966]: Cluster is now healthy
Nov 26 12:39:09 compute-0 competent_sanderson[102044]: {
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:     "0": [
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:         {
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "devices": [
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "/dev/loop3"
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             ],
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "lv_name": "ceph_lv0",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "lv_size": "21470642176",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "name": "ceph_lv0",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "tags": {
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.cluster_name": "ceph",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.crush_device_class": "",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.encrypted": "0",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.osd_id": "0",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.type": "block",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.vdo": "0"
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             },
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "type": "block",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "vg_name": "ceph_vg0"
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:         }
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:     ],
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:     "1": [
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:         {
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "devices": [
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "/dev/loop4"
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             ],
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "lv_name": "ceph_lv1",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "lv_size": "21470642176",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "name": "ceph_lv1",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "tags": {
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.cluster_name": "ceph",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.crush_device_class": "",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.encrypted": "0",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.osd_id": "1",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.type": "block",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.vdo": "0"
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             },
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "type": "block",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "vg_name": "ceph_vg1"
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:         }
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:     ],
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:     "2": [
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:         {
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "devices": [
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "/dev/loop5"
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             ],
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "lv_name": "ceph_lv2",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "lv_size": "21470642176",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "name": "ceph_lv2",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "tags": {
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.cluster_name": "ceph",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.crush_device_class": "",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.encrypted": "0",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.osd_id": "2",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.type": "block",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:                 "ceph.vdo": "0"
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             },
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "type": "block",
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:             "vg_name": "ceph_vg2"
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:         }
Nov 26 12:39:09 compute-0 competent_sanderson[102044]:     ]
Nov 26 12:39:09 compute-0 competent_sanderson[102044]: }
Nov 26 12:39:09 compute-0 systemd[1]: libpod-04dd892334d211c5bac12bde5c99fb0872b4880b33587e72a8aa86bc5c5d8509.scope: Deactivated successfully.
Nov 26 12:39:09 compute-0 podman[102031]: 2025-11-26 12:39:09.605886926 +0000 UTC m=+0.719897375 container died 04dd892334d211c5bac12bde5c99fb0872b4880b33587e72a8aa86bc5c5d8509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 12:39:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-21fdc469917721c78bdf26a2d9e14d65b1153748658ce1f96d8c14765196c09e-merged.mount: Deactivated successfully.
Nov 26 12:39:09 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Nov 26 12:39:09 compute-0 podman[102031]: 2025-11-26 12:39:09.637141982 +0000 UTC m=+0.751152431 container remove 04dd892334d211c5bac12bde5c99fb0872b4880b33587e72a8aa86bc5c5d8509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sanderson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 12:39:09 compute-0 systemd[1]: libpod-conmon-04dd892334d211c5bac12bde5c99fb0872b4880b33587e72a8aa86bc5c5d8509.scope: Deactivated successfully.
Nov 26 12:39:09 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Nov 26 12:39:09 compute-0 sudo[101939]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:09 compute-0 sudo[102062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:39:09 compute-0 sudo[102062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:09 compute-0 sudo[102062]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:09 compute-0 sudo[102087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:39:09 compute-0 sudo[102087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:09 compute-0 sudo[102087]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:09 compute-0 sudo[102112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:39:09 compute-0 sudo[102112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:09 compute-0 sudo[102112]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:09 compute-0 sudo[102137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- raw list --format json
Nov 26 12:39:09 compute-0 sudo[102137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:09 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v78: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 7.0 KiB/s wr, 204 op/s
Nov 26 12:39:10 compute-0 podman[102193]: 2025-11-26 12:39:10.035575693 +0000 UTC m=+0.025536671 container create ced88ad3a1f3f3dfdfd54240e9b8cf0d97bebc66cdf44270d0bec8228b57b8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:39:10 compute-0 systemd[1]: Started libpod-conmon-ced88ad3a1f3f3dfdfd54240e9b8cf0d97bebc66cdf44270d0bec8228b57b8c4.scope.
Nov 26 12:39:10 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:39:10 compute-0 podman[102193]: 2025-11-26 12:39:10.075538887 +0000 UTC m=+0.065499875 container init ced88ad3a1f3f3dfdfd54240e9b8cf0d97bebc66cdf44270d0bec8228b57b8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 12:39:10 compute-0 podman[102193]: 2025-11-26 12:39:10.079840964 +0000 UTC m=+0.069801932 container start ced88ad3a1f3f3dfdfd54240e9b8cf0d97bebc66cdf44270d0bec8228b57b8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_galileo, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 12:39:10 compute-0 podman[102193]: 2025-11-26 12:39:10.080917112 +0000 UTC m=+0.070878080 container attach ced88ad3a1f3f3dfdfd54240e9b8cf0d97bebc66cdf44270d0bec8228b57b8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:39:10 compute-0 flamboyant_galileo[102206]: 167 167
Nov 26 12:39:10 compute-0 systemd[1]: libpod-ced88ad3a1f3f3dfdfd54240e9b8cf0d97bebc66cdf44270d0bec8228b57b8c4.scope: Deactivated successfully.
Nov 26 12:39:10 compute-0 conmon[102206]: conmon ced88ad3a1f3f3dfdfd5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ced88ad3a1f3f3dfdfd54240e9b8cf0d97bebc66cdf44270d0bec8228b57b8c4.scope/container/memory.events
Nov 26 12:39:10 compute-0 podman[102193]: 2025-11-26 12:39:10.08399407 +0000 UTC m=+0.073955039 container died ced88ad3a1f3f3dfdfd54240e9b8cf0d97bebc66cdf44270d0bec8228b57b8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_galileo, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 12:39:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-6eb5674a165bb980a1f94071837bddf5216f10643f0ef186624c5fe983d30ef4-merged.mount: Deactivated successfully.
Nov 26 12:39:10 compute-0 podman[102193]: 2025-11-26 12:39:10.101696388 +0000 UTC m=+0.091657356 container remove ced88ad3a1f3f3dfdfd54240e9b8cf0d97bebc66cdf44270d0bec8228b57b8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Nov 26 12:39:10 compute-0 podman[102193]: 2025-11-26 12:39:10.025031494 +0000 UTC m=+0.014992482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:39:10 compute-0 systemd[1]: libpod-conmon-ced88ad3a1f3f3dfdfd54240e9b8cf0d97bebc66cdf44270d0bec8228b57b8c4.scope: Deactivated successfully.
Nov 26 12:39:10 compute-0 podman[102228]: 2025-11-26 12:39:10.208811991 +0000 UTC m=+0.025837619 container create 2bba75926e00b147f46f3a4e0440f7d791860c6c2edf698d52a500e8a12cd5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dubinsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 12:39:10 compute-0 systemd[1]: Started libpod-conmon-2bba75926e00b147f46f3a4e0440f7d791860c6c2edf698d52a500e8a12cd5c2.scope.
Nov 26 12:39:10 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:39:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0cfb07f2951e6cd84d336ca2c8800edb22c915cb8b708c700fe5dcefb62e002/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0cfb07f2951e6cd84d336ca2c8800edb22c915cb8b708c700fe5dcefb62e002/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0cfb07f2951e6cd84d336ca2c8800edb22c915cb8b708c700fe5dcefb62e002/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0cfb07f2951e6cd84d336ca2c8800edb22c915cb8b708c700fe5dcefb62e002/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:10 compute-0 podman[102228]: 2025-11-26 12:39:10.26095078 +0000 UTC m=+0.077976417 container init 2bba75926e00b147f46f3a4e0440f7d791860c6c2edf698d52a500e8a12cd5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dubinsky, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 12:39:10 compute-0 podman[102228]: 2025-11-26 12:39:10.265571647 +0000 UTC m=+0.082597264 container start 2bba75926e00b147f46f3a4e0440f7d791860c6c2edf698d52a500e8a12cd5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 12:39:10 compute-0 podman[102228]: 2025-11-26 12:39:10.266851569 +0000 UTC m=+0.083877196 container attach 2bba75926e00b147f46f3a4e0440f7d791860c6c2edf698d52a500e8a12cd5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dubinsky, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:39:10 compute-0 podman[102228]: 2025-11-26 12:39:10.198537742 +0000 UTC m=+0.015563389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:39:10 compute-0 ceph-mon[74966]: 2.1b scrub starts
Nov 26 12:39:10 compute-0 ceph-mon[74966]: 2.1b scrub ok
Nov 26 12:39:10 compute-0 ceph-mon[74966]: 4.1f scrub starts
Nov 26 12:39:10 compute-0 ceph-mon[74966]: 4.1f scrub ok
Nov 26 12:39:10 compute-0 ceph-mon[74966]: pgmap v78: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 7.0 KiB/s wr, 204 op/s
Nov 26 12:39:10 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.a scrub starts
Nov 26 12:39:10 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.a scrub ok
Nov 26 12:39:10 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:39:11 compute-0 admiring_dubinsky[102241]: {
Nov 26 12:39:11 compute-0 admiring_dubinsky[102241]:     "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 12:39:11 compute-0 admiring_dubinsky[102241]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:39:11 compute-0 admiring_dubinsky[102241]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 12:39:11 compute-0 admiring_dubinsky[102241]:         "osd_id": 1,
Nov 26 12:39:11 compute-0 admiring_dubinsky[102241]:         "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:39:11 compute-0 admiring_dubinsky[102241]:         "type": "bluestore"
Nov 26 12:39:11 compute-0 admiring_dubinsky[102241]:     },
Nov 26 12:39:11 compute-0 admiring_dubinsky[102241]:     "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 12:39:11 compute-0 admiring_dubinsky[102241]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:39:11 compute-0 admiring_dubinsky[102241]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 12:39:11 compute-0 admiring_dubinsky[102241]:         "osd_id": 2,
Nov 26 12:39:11 compute-0 admiring_dubinsky[102241]:         "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:39:11 compute-0 admiring_dubinsky[102241]:         "type": "bluestore"
Nov 26 12:39:11 compute-0 admiring_dubinsky[102241]:     },
Nov 26 12:39:11 compute-0 admiring_dubinsky[102241]:     "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 12:39:11 compute-0 admiring_dubinsky[102241]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:39:11 compute-0 admiring_dubinsky[102241]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 12:39:11 compute-0 admiring_dubinsky[102241]:         "osd_id": 0,
Nov 26 12:39:11 compute-0 admiring_dubinsky[102241]:         "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:39:11 compute-0 admiring_dubinsky[102241]:         "type": "bluestore"
Nov 26 12:39:11 compute-0 admiring_dubinsky[102241]:     }
Nov 26 12:39:11 compute-0 admiring_dubinsky[102241]: }
Nov 26 12:39:11 compute-0 systemd[1]: libpod-2bba75926e00b147f46f3a4e0440f7d791860c6c2edf698d52a500e8a12cd5c2.scope: Deactivated successfully.
Nov 26 12:39:11 compute-0 podman[102228]: 2025-11-26 12:39:11.039628065 +0000 UTC m=+0.856653722 container died 2bba75926e00b147f46f3a4e0440f7d791860c6c2edf698d52a500e8a12cd5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:39:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0cfb07f2951e6cd84d336ca2c8800edb22c915cb8b708c700fe5dcefb62e002-merged.mount: Deactivated successfully.
Nov 26 12:39:11 compute-0 podman[102228]: 2025-11-26 12:39:11.070316452 +0000 UTC m=+0.887342079 container remove 2bba75926e00b147f46f3a4e0440f7d791860c6c2edf698d52a500e8a12cd5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:39:11 compute-0 systemd[1]: libpod-conmon-2bba75926e00b147f46f3a4e0440f7d791860c6c2edf698d52a500e8a12cd5c2.scope: Deactivated successfully.
Nov 26 12:39:11 compute-0 sudo[102137]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:11 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:39:11 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:11 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:39:11 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:11 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 2a080a50-abb6-4a13-b871-f6322681eddf does not exist
Nov 26 12:39:11 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev efdb0c01-1ef0-4add-ba55-274f2b31e62a does not exist
Nov 26 12:39:11 compute-0 sudo[102284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:39:11 compute-0 sudo[102284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:11 compute-0 sudo[102284]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:11 compute-0 sudo[102309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 12:39:11 compute-0 sudo[102309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:39:11 compute-0 sudo[102309]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:11 compute-0 ceph-mon[74966]: 5.a scrub starts
Nov 26 12:39:11 compute-0 ceph-mon[74966]: 5.a scrub ok
Nov 26 12:39:11 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:11 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:11 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.b scrub starts
Nov 26 12:39:11 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.b scrub ok
Nov 26 12:39:11 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v79: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 5.0 KiB/s wr, 170 op/s
Nov 26 12:39:12 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Nov 26 12:39:12 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Nov 26 12:39:12 compute-0 ceph-mon[74966]: 5.b scrub starts
Nov 26 12:39:12 compute-0 ceph-mon[74966]: 5.b scrub ok
Nov 26 12:39:12 compute-0 ceph-mon[74966]: pgmap v79: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 5.0 KiB/s wr, 170 op/s
Nov 26 12:39:13 compute-0 ceph-mon[74966]: 2.17 scrub starts
Nov 26 12:39:13 compute-0 ceph-mon[74966]: 2.17 scrub ok
Nov 26 12:39:13 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.d scrub starts
Nov 26 12:39:13 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.d scrub ok
Nov 26 12:39:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.1b deep-scrub starts
Nov 26 12:39:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.1b deep-scrub ok
Nov 26 12:39:13 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v80: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 4.0 KiB/s wr, 137 op/s
Nov 26 12:39:14 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Nov 26 12:39:14 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Nov 26 12:39:14 compute-0 ceph-mon[74966]: 5.d scrub starts
Nov 26 12:39:14 compute-0 ceph-mon[74966]: 5.d scrub ok
Nov 26 12:39:14 compute-0 ceph-mon[74966]: 3.1b deep-scrub starts
Nov 26 12:39:14 compute-0 ceph-mon[74966]: 3.1b deep-scrub ok
Nov 26 12:39:14 compute-0 ceph-mon[74966]: pgmap v80: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 4.0 KiB/s wr, 137 op/s
Nov 26 12:39:15 compute-0 ceph-mon[74966]: 5.13 scrub starts
Nov 26 12:39:15 compute-0 ceph-mon[74966]: 5.13 scrub ok
Nov 26 12:39:15 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v81: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 3.4 KiB/s wr, 116 op/s
Nov 26 12:39:15 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:39:16 compute-0 ceph-mon[74966]: pgmap v81: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 3.4 KiB/s wr, 116 op/s
Nov 26 12:39:16 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.e scrub starts
Nov 26 12:39:16 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.e scrub ok
Nov 26 12:39:16 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Nov 26 12:39:16 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Nov 26 12:39:17 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 26 12:39:17 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 26 12:39:17 compute-0 ceph-mon[74966]: 5.e scrub starts
Nov 26 12:39:17 compute-0 ceph-mon[74966]: 5.e scrub ok
Nov 26 12:39:17 compute-0 ceph-mon[74966]: 3.1f scrub starts
Nov 26 12:39:17 compute-0 ceph-mon[74966]: 3.1f scrub ok
Nov 26 12:39:17 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v82: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.1 KiB/s wr, 105 op/s
Nov 26 12:39:18 compute-0 ceph-mon[74966]: 5.11 scrub starts
Nov 26 12:39:18 compute-0 ceph-mon[74966]: 5.11 scrub ok
Nov 26 12:39:18 compute-0 ceph-mon[74966]: pgmap v82: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.1 KiB/s wr, 105 op/s
Nov 26 12:39:18 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 26 12:39:18 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 26 12:39:19 compute-0 ceph-mon[74966]: 3.a scrub starts
Nov 26 12:39:19 compute-0 ceph-mon[74966]: 3.a scrub ok
Nov 26 12:39:19 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Nov 26 12:39:19 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Nov 26 12:39:19 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v83: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 KiB/s wr, 91 op/s
Nov 26 12:39:20 compute-0 ceph-mon[74966]: 5.10 scrub starts
Nov 26 12:39:20 compute-0 ceph-mon[74966]: 5.10 scrub ok
Nov 26 12:39:20 compute-0 ceph-mon[74966]: pgmap v83: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 KiB/s wr, 91 op/s
Nov 26 12:39:20 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 26 12:39:20 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 26 12:39:20 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Nov 26 12:39:20 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Nov 26 12:39:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:39:21 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Nov 26 12:39:21 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Nov 26 12:39:21 compute-0 ceph-mon[74966]: 5.17 scrub starts
Nov 26 12:39:21 compute-0 ceph-mon[74966]: 5.17 scrub ok
Nov 26 12:39:21 compute-0 ceph-mon[74966]: 3.9 scrub starts
Nov 26 12:39:21 compute-0 ceph-mon[74966]: 3.9 scrub ok
Nov 26 12:39:21 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v84: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:39:22 compute-0 ceph-mon[74966]: 2.15 scrub starts
Nov 26 12:39:22 compute-0 ceph-mon[74966]: 2.15 scrub ok
Nov 26 12:39:22 compute-0 ceph-mon[74966]: pgmap v84: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:39:23 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v85: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:39:24 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Nov 26 12:39:24 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Nov 26 12:39:24 compute-0 ceph-mon[74966]: pgmap v85: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:39:25 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Nov 26 12:39:25 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Nov 26 12:39:25 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v86: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:39:25 compute-0 ceph-mon[74966]: 5.12 scrub starts
Nov 26 12:39:25 compute-0 ceph-mon[74966]: 5.12 scrub ok
Nov 26 12:39:25 compute-0 ceph-mon[74966]: 5.1b scrub starts
Nov 26 12:39:25 compute-0 ceph-mon[74966]: 5.1b scrub ok
Nov 26 12:39:25 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:39:26 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.1c deep-scrub starts
Nov 26 12:39:26 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.1c deep-scrub ok
Nov 26 12:39:26 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 26 12:39:26 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 26 12:39:26 compute-0 ceph-mon[74966]: pgmap v86: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:39:26 compute-0 ceph-mon[74966]: 5.1c deep-scrub starts
Nov 26 12:39:26 compute-0 ceph-mon[74966]: 5.1c deep-scrub ok
Nov 26 12:39:26 compute-0 ceph-mon[74966]: 3.6 scrub starts
Nov 26 12:39:26 compute-0 ceph-mon[74966]: 3.6 scrub ok
Nov 26 12:39:27 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Nov 26 12:39:27 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Nov 26 12:39:27 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v87: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:39:27 compute-0 ceph-mon[74966]: 5.1f scrub starts
Nov 26 12:39:27 compute-0 ceph-mon[74966]: 5.1f scrub ok
Nov 26 12:39:28 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.f scrub starts
Nov 26 12:39:28 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.f scrub ok
Nov 26 12:39:28 compute-0 ceph-mon[74966]: pgmap v87: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:39:28 compute-0 ceph-mon[74966]: 3.f scrub starts
Nov 26 12:39:28 compute-0 ceph-mon[74966]: 3.f scrub ok
Nov 26 12:39:29 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 26 12:39:29 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 26 12:39:29 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v88: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:39:29 compute-0 ceph-mon[74966]: 3.12 scrub starts
Nov 26 12:39:29 compute-0 ceph-mon[74966]: 3.12 scrub ok
Nov 26 12:39:30 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Nov 26 12:39:30 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Nov 26 12:39:30 compute-0 ceph-mon[74966]: pgmap v88: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:39:30 compute-0 ceph-mon[74966]: 3.3 scrub starts
Nov 26 12:39:30 compute-0 ceph-mon[74966]: 3.3 scrub ok
Nov 26 12:39:30 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:39:31 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v89: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:39:32 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 26 12:39:32 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 26 12:39:32 compute-0 ceph-mon[74966]: pgmap v89: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:39:32 compute-0 ceph-mon[74966]: 3.1e scrub starts
Nov 26 12:39:32 compute-0 ceph-mon[74966]: 3.1e scrub ok
Nov 26 12:39:33 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v90: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:39:34 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 26 12:39:34 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 26 12:39:34 compute-0 ceph-mon[74966]: pgmap v90: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:39:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:39:35
Nov 26 12:39:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 12:39:35 compute-0 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 12:39:35 compute-0 ceph-mgr[75236]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'vms']
Nov 26 12:39:35 compute-0 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 12:39:35 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v91: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:39:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:39:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:39:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 12:39:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:39:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:39:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:39:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:39:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 12:39:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:39:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:39:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:39:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:39:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:39:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:39:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:39:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:39:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:39:35 compute-0 ceph-mon[74966]: 2.d scrub starts
Nov 26 12:39:35 compute-0 ceph-mon[74966]: 2.d scrub ok
Nov 26 12:39:36 compute-0 ceph-mon[74966]: pgmap v91: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:39:37 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v92: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:39:38 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Nov 26 12:39:38 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Nov 26 12:39:38 compute-0 ceph-mon[74966]: pgmap v92: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:39:38 compute-0 ceph-mon[74966]: 3.1d scrub starts
Nov 26 12:39:38 compute-0 ceph-mon[74966]: 3.1d scrub ok
Nov 26 12:39:39 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 12:39:39 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:39:39 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 12:39:39 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:39:39 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:39:39 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:39:39 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:39:39 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:39:39 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:39:39 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:39:39 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:39:39 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:39:39 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 1)
Nov 26 12:39:39 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:39:39 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 26 12:39:39 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:39:39 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Nov 26 12:39:39 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:39:39 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Nov 26 12:39:39 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:39:39 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 26 12:39:39 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:39:39 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Nov 26 12:39:39 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Nov 26 12:39:39 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 26 12:39:39 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 26 12:39:39 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 26 12:39:39 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v93: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:39:39 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Nov 26 12:39:39 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 26 12:39:39 compute-0 ceph-mon[74966]: 3.c scrub starts
Nov 26 12:39:39 compute-0 ceph-mon[74966]: 3.c scrub ok
Nov 26 12:39:39 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 26 12:39:39 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Nov 26 12:39:39 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Nov 26 12:39:39 compute-0 ceph-mgr[75236]: [progress INFO root] update: starting ev 36de624a-9d50-44b1-bd23-2697e369fb1b (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 26 12:39:39 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 12:39:39 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 12:39:40 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 26 12:39:40 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 26 12:39:40 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:39:40 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Nov 26 12:39:40 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 26 12:39:40 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Nov 26 12:39:40 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Nov 26 12:39:40 compute-0 ceph-mgr[75236]: [progress INFO root] update: starting ev 97381620-0fc6-4cf2-8054-02242571a1cf (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 26 12:39:40 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 12:39:40 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 12:39:40 compute-0 ceph-mon[74966]: pgmap v93: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:39:40 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 26 12:39:40 compute-0 ceph-mon[74966]: osdmap e41: 3 total, 3 up, 3 in
Nov 26 12:39:40 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 12:39:41 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v96: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:39:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 12:39:41 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 12:39:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Nov 26 12:39:41 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 26 12:39:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Nov 26 12:39:41 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 26 12:39:41 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 12:39:41 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 26 12:39:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Nov 26 12:39:41 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Nov 26 12:39:41 compute-0 ceph-mon[74966]: 5.9 scrub starts
Nov 26 12:39:41 compute-0 ceph-mgr[75236]: [progress INFO root] update: starting ev 851076a2-72bd-4bf1-9f40-983640aeedea (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 26 12:39:41 compute-0 ceph-mon[74966]: 5.9 scrub ok
Nov 26 12:39:41 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 26 12:39:41 compute-0 ceph-mon[74966]: osdmap e42: 3 total, 3 up, 3 in
Nov 26 12:39:41 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 12:39:41 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 12:39:41 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 26 12:39:41 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 26 12:39:41 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 12:39:41 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 26 12:39:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 12:39:41 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 12:39:42 compute-0 sudo[102357]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npvmsrtuqpsmjxlmpxoojwotrkjcviuh ; /usr/bin/python3'
Nov 26 12:39:42 compute-0 sudo[102357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:39:42 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Nov 26 12:39:42 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Nov 26 12:39:42 compute-0 python3[102359]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:39:42 compute-0 podman[102360]: 2025-11-26 12:39:42.423016588 +0000 UTC m=+0.027848530 container create 1a374b14fa5e20bbad0c30fe2d68fb1680f5134aa726ea75689d6ab38efe8956 (image=quay.io/ceph/ceph:v18, name=jolly_nash, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 12:39:42 compute-0 systemd[76457]: Starting Mark boot as successful...
Nov 26 12:39:42 compute-0 systemd[76457]: Finished Mark boot as successful.
Nov 26 12:39:42 compute-0 systemd[1]: Started libpod-conmon-1a374b14fa5e20bbad0c30fe2d68fb1680f5134aa726ea75689d6ab38efe8956.scope.
Nov 26 12:39:42 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:39:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b67d27b75acf8122f491357cec77474032d89d931528d2bcdd57796859cbe957/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b67d27b75acf8122f491357cec77474032d89d931528d2bcdd57796859cbe957/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:42 compute-0 podman[102360]: 2025-11-26 12:39:42.463935635 +0000 UTC m=+0.068767577 container init 1a374b14fa5e20bbad0c30fe2d68fb1680f5134aa726ea75689d6ab38efe8956 (image=quay.io/ceph/ceph:v18, name=jolly_nash, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 12:39:42 compute-0 podman[102360]: 2025-11-26 12:39:42.468713231 +0000 UTC m=+0.073545173 container start 1a374b14fa5e20bbad0c30fe2d68fb1680f5134aa726ea75689d6ab38efe8956 (image=quay.io/ceph/ceph:v18, name=jolly_nash, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 12:39:42 compute-0 podman[102360]: 2025-11-26 12:39:42.469931598 +0000 UTC m=+0.074763540 container attach 1a374b14fa5e20bbad0c30fe2d68fb1680f5134aa726ea75689d6ab38efe8956 (image=quay.io/ceph/ceph:v18, name=jolly_nash, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:39:42 compute-0 podman[102360]: 2025-11-26 12:39:42.41150944 +0000 UTC m=+0.016341402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 43 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=43 pruub=15.745674133s) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active pruub 98.196243286s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 43 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=43 pruub=15.745674133s) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown pruub 98.196243286s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 jolly_nash[102373]: could not fetch user info: no user info saved
Nov 26 12:39:42 compute-0 systemd[1]: libpod-1a374b14fa5e20bbad0c30fe2d68fb1680f5134aa726ea75689d6ab38efe8956.scope: Deactivated successfully.
Nov 26 12:39:42 compute-0 podman[102458]: 2025-11-26 12:39:42.602191903 +0000 UTC m=+0.016183272 container died 1a374b14fa5e20bbad0c30fe2d68fb1680f5134aa726ea75689d6ab38efe8956 (image=quay.io/ceph/ceph:v18, name=jolly_nash, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 12:39:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b67d27b75acf8122f491357cec77474032d89d931528d2bcdd57796859cbe957-merged.mount: Deactivated successfully.
Nov 26 12:39:42 compute-0 podman[102458]: 2025-11-26 12:39:42.623277828 +0000 UTC m=+0.037269208 container remove 1a374b14fa5e20bbad0c30fe2d68fb1680f5134aa726ea75689d6ab38efe8956 (image=quay.io/ceph/ceph:v18, name=jolly_nash, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:39:42 compute-0 systemd[1]: libpod-conmon-1a374b14fa5e20bbad0c30fe2d68fb1680f5134aa726ea75689d6ab38efe8956.scope: Deactivated successfully.
Nov 26 12:39:42 compute-0 sudo[102357]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 43 pg[6.0( v 37'39 (0'0,37'39] local-lis/les=21/22 n=22 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=43 pruub=14.548981667s) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 33'38 mlcod 33'38 active pruub 100.505180359s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 43 pg[6.0( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=43 pruub=14.548981667s) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 33'38 mlcod 0'0 unknown pruub 100.505180359s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 sudo[102493]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mydhouiuhbolsgdqrcawebvliqylffwe ; /usr/bin/python3'
Nov 26 12:39:42 compute-0 sudo[102493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:39:42 compute-0 python3[102495]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:39:42 compute-0 podman[102496]: 2025-11-26 12:39:42.901322013 +0000 UTC m=+0.030112596 container create 156df5fa25748864d732d9f3bb622fa9c228939a7c25ca9aa2befbea4bb860fe (image=quay.io/ceph/ceph:v18, name=strange_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:39:42 compute-0 systemd[1]: Started libpod-conmon-156df5fa25748864d732d9f3bb622fa9c228939a7c25ca9aa2befbea4bb860fe.scope.
Nov 26 12:39:42 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:39:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db3a483c340c786e50e4a217f5aa0fdfc4b472053744572ecc1af3bf85862d6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db3a483c340c786e50e4a217f5aa0fdfc4b472053744572ecc1af3bf85862d6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:39:42 compute-0 podman[102496]: 2025-11-26 12:39:42.95810606 +0000 UTC m=+0.086896663 container init 156df5fa25748864d732d9f3bb622fa9c228939a7c25ca9aa2befbea4bb860fe (image=quay.io/ceph/ceph:v18, name=strange_ardinghelli, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:39:42 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Nov 26 12:39:42 compute-0 podman[102496]: 2025-11-26 12:39:42.96241593 +0000 UTC m=+0.091206513 container start 156df5fa25748864d732d9f3bb622fa9c228939a7c25ca9aa2befbea4bb860fe (image=quay.io/ceph/ceph:v18, name=strange_ardinghelli, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:39:42 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 26 12:39:42 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Nov 26 12:39:42 compute-0 podman[102496]: 2025-11-26 12:39:42.964724301 +0000 UTC m=+0.093514884 container attach 156df5fa25748864d732d9f3bb622fa9c228939a7c25ca9aa2befbea4bb860fe (image=quay.io/ceph/ceph:v18, name=strange_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:39:42 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.c( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.e( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.2( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=21/22 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.4( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=21/22 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.0( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 33'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-mgr[75236]: [progress INFO root] update: starting ev 36e79091-7762-417b-919b-09dffa4a735f (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-mon[74966]: pgmap v96: 135 pgs: 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:39:42 compute-0 ceph-mon[74966]: osdmap e43: 3 total, 3 up, 3 in
Nov 26 12:39:42 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 44 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=21/21 les/c/f=22/22/0 sis=43) [0] r=0 lpr=43 pi=[21,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.1e( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.1d( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.12( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.10( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.17( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=22/23 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.1e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.10( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.17( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.1d( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.12( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.0( empty local-lis/les=43/44 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 12:39:42 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=22/22 les/c/f=23/23/0 sis=43) [1] r=0 lpr=43 pi=[22,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:42 compute-0 podman[102496]: 2025-11-26 12:39:42.889149616 +0000 UTC m=+0.017940219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]: {
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:     "user_id": "openstack",
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:     "display_name": "openstack",
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:     "email": "",
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:     "suspended": 0,
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:     "max_buckets": 1000,
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:     "subusers": [],
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:     "keys": [
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:         {
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:             "user": "openstack",
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:             "access_key": "WYI892EBCV9E5ADCPWUD",
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:             "secret_key": "DFGAh7AOOJhuNdPraF6w21clnbHt7zF6Fw4Ep8TV"
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:         }
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:     ],
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:     "swift_keys": [],
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:     "caps": [],
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:     "op_mask": "read, write, delete",
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:     "default_placement": "",
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:     "default_storage_class": "",
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:     "placement_tags": [],
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:     "bucket_quota": {
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:         "enabled": false,
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:         "check_on_raw": false,
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:         "max_size": -1,
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:         "max_size_kb": 0,
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:         "max_objects": -1
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:     },
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:     "user_quota": {
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:         "enabled": false,
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:         "check_on_raw": false,
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:         "max_size": -1,
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:         "max_size_kb": 0,
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:         "max_objects": -1
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:     },
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:     "temp_url_keys": [],
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:     "type": "rgw",
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]:     "mfa_ids": []
Nov 26 12:39:43 compute-0 strange_ardinghelli[102508]: }
Nov 26 12:39:43 compute-0 systemd[1]: libpod-156df5fa25748864d732d9f3bb622fa9c228939a7c25ca9aa2befbea4bb860fe.scope: Deactivated successfully.
Nov 26 12:39:43 compute-0 conmon[102508]: conmon 156df5fa25748864d732 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-156df5fa25748864d732d9f3bb622fa9c228939a7c25ca9aa2befbea4bb860fe.scope/container/memory.events
Nov 26 12:39:43 compute-0 podman[102593]: 2025-11-26 12:39:43.10415895 +0000 UTC m=+0.015921848 container died 156df5fa25748864d732d9f3bb622fa9c228939a7c25ca9aa2befbea4bb860fe (image=quay.io/ceph/ceph:v18, name=strange_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 26 12:39:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-3db3a483c340c786e50e4a217f5aa0fdfc4b472053744572ecc1af3bf85862d6-merged.mount: Deactivated successfully.
Nov 26 12:39:43 compute-0 podman[102593]: 2025-11-26 12:39:43.121315394 +0000 UTC m=+0.033078283 container remove 156df5fa25748864d732d9f3bb622fa9c228939a7c25ca9aa2befbea4bb860fe (image=quay.io/ceph/ceph:v18, name=strange_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:39:43 compute-0 systemd[1]: libpod-conmon-156df5fa25748864d732d9f3bb622fa9c228939a7c25ca9aa2befbea4bb860fe.scope: Deactivated successfully.
Nov 26 12:39:43 compute-0 sudo[102493]: pam_unix(sudo:session): session closed for user root
Nov 26 12:39:43 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Nov 26 12:39:43 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Nov 26 12:39:43 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 26 12:39:43 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 26 12:39:43 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v99: 181 pgs: 46 unknown, 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:39:43 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 12:39:43 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 12:39:43 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 12:39:43 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 12:39:43 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Nov 26 12:39:43 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 26 12:39:43 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 12:39:43 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 12:39:43 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Nov 26 12:39:43 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Nov 26 12:39:43 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 45 pg[9.0( v 44'389 (0'0,44'389] local-lis/les=34/35 n=177 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=14.435843468s) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 44'388 mlcod 44'388 active pruub 98.346565247s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:43 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 45 pg[8.0( v 33'4 (0'0,33'4] local-lis/les=32/33 n=4 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=45 pruub=12.434915543s) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 33'3 mlcod 33'3 active pruub 96.345756531s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:43 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 45 pg[8.0( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=45 pruub=12.434915543s) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 33'3 mlcod 0'0 unknown pruub 96.345756531s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:43 compute-0 ceph-mon[74966]: 5.16 scrub starts
Nov 26 12:39:43 compute-0 ceph-mon[74966]: 5.16 scrub ok
Nov 26 12:39:43 compute-0 ceph-mgr[75236]: [progress INFO root] update: starting ev c680786d-647d-4d93-ac3b-06d054393d01 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 26 12:39:43 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 26 12:39:43 compute-0 ceph-mon[74966]: osdmap e44: 3 total, 3 up, 3 in
Nov 26 12:39:43 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 12:39:43 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 12:39:43 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 12:39:43 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 45 pg[9.0( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=14.435843468s) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 44'388 mlcod 0'0 unknown pruub 98.346565247s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:43 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 12:39:43 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 12:39:44 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.a scrub starts
Nov 26 12:39:44 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.a scrub ok
Nov 26 12:39:44 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 26 12:39:44 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 26 12:39:44 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Nov 26 12:39:44 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 26 12:39:44 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Nov 26 12:39:44 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.15( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.14( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.14( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.16( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.17( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.11( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-mgr[75236]: [progress INFO root] update: starting ev 80941aeb-ad6f-4e3b-99fd-e43e8972ed31 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 26 12:39:44 compute-0 ceph-mon[74966]: 2.4 scrub starts
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.3( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-mon[74966]: 2.4 scrub ok
Nov 26 12:39:44 compute-0 ceph-mon[74966]: 3.17 scrub starts
Nov 26 12:39:44 compute-0 ceph-mon[74966]: 3.17 scrub ok
Nov 26 12:39:44 compute-0 ceph-mon[74966]: pgmap v99: 181 pgs: 46 unknown, 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:39:44 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 26 12:39:44 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 12:39:44 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 12:39:44 compute-0 ceph-mon[74966]: osdmap e45: 3 total, 3 up, 3 in
Nov 26 12:39:44 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 12:39:44 compute-0 ceph-mon[74966]: 3.7 scrub starts
Nov 26 12:39:44 compute-0 ceph-mon[74966]: 3.7 scrub ok
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=1 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.1( v 33'4 (0'0,33'4] local-lis/les=32/33 n=1 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-mgr[75236]: [progress INFO root] complete: finished ev 36de624a-9d50-44b1-bd23-2697e369fb1b (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 26 12:39:44 compute-0 ceph-mgr[75236]: [progress INFO root] Completed event 36de624a-9d50-44b1-bd23-2697e369fb1b (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Nov 26 12:39:44 compute-0 ceph-mgr[75236]: [progress INFO root] complete: finished ev 97381620-0fc6-4cf2-8054-02242571a1cf (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 26 12:39:44 compute-0 ceph-mgr[75236]: [progress INFO root] Completed event 97381620-0fc6-4cf2-8054-02242571a1cf (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.2( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-mgr[75236]: [progress INFO root] complete: finished ev 851076a2-72bd-4bf1-9f40-983640aeedea (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 26 12:39:44 compute-0 ceph-mgr[75236]: [progress INFO root] Completed event 851076a2-72bd-4bf1-9f40-983640aeedea (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-mgr[75236]: [progress INFO root] complete: finished ev 36e79091-7762-417b-919b-09dffa4a735f (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 26 12:39:44 compute-0 ceph-mgr[75236]: [progress INFO root] Completed event 36e79091-7762-417b-919b-09dffa4a735f (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Nov 26 12:39:44 compute-0 ceph-mgr[75236]: [progress INFO root] complete: finished ev c680786d-647d-4d93-ac3b-06d054393d01 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 26 12:39:44 compute-0 ceph-mgr[75236]: [progress INFO root] Completed event c680786d-647d-4d93-ac3b-06d054393d01 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Nov 26 12:39:44 compute-0 ceph-mgr[75236]: [progress INFO root] complete: finished ev 80941aeb-ad6f-4e3b-99fd-e43e8972ed31 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 26 12:39:44 compute-0 ceph-mgr[75236]: [progress INFO root] Completed event 80941aeb-ad6f-4e3b-99fd-e43e8972ed31 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.d( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.8( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.9( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.3( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=1 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.a( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.9( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.1( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.8( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.7( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.6( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.7( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.4( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.5( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=1 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.5( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.1a( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.19( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.1e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.1f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.1f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.1e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.1c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.1d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.1d( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.6( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.13( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.10( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.b( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.1b( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.1a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.18( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.19( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.17( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.12( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=34/35 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.14( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.17( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.0( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 44'388 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.1( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.2( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.8( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.a( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.0( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 33'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.3( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.13( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.16( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.5( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.1a( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.4( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.1e( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.10( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.19( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.7( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.12( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.16( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[8.13( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=32/32 les/c/f=33/33/0 sis=45) [1] r=0 lpr=45 pi=[32,45)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:44 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 46 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [1] r=0 lpr=45 pi=[34,45)/1 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:45 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 26 12:39:45 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 26 12:39:45 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Nov 26 12:39:45 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Nov 26 12:39:45 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v102: 243 pgs: 2 peering, 77 unknown, 164 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 511 B/s wr, 3 op/s
Nov 26 12:39:45 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 12:39:45 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 12:39:45 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 12:39:45 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 12:39:45 compute-0 ceph-mgr[75236]: [progress INFO root] Writing back 15 completed events
Nov 26 12:39:45 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 26 12:39:45 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:45 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:39:45 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Nov 26 12:39:45 compute-0 ceph-mon[74966]: 2.a scrub starts
Nov 26 12:39:45 compute-0 ceph-mon[74966]: 2.a scrub ok
Nov 26 12:39:45 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 26 12:39:45 compute-0 ceph-mon[74966]: osdmap e46: 3 total, 3 up, 3 in
Nov 26 12:39:45 compute-0 ceph-mon[74966]: 3.5 scrub starts
Nov 26 12:39:45 compute-0 ceph-mon[74966]: 3.5 scrub ok
Nov 26 12:39:45 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 12:39:45 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 12:39:45 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:39:45 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 12:39:45 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 12:39:45 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Nov 26 12:39:45 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Nov 26 12:39:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 47 pg[11.0( v 44'2 (0'0,44'2] local-lis/les=38/39 n=2 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=47 pruub=8.427951813s) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 44'1 mlcod 44'1 active pruub 94.355407715s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:45 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 47 pg[11.0( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=47 pruub=8.427951813s) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 44'1 mlcod 0'0 unknown pruub 94.355407715s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 26 12:39:46 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 26 12:39:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Nov 26 12:39:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Nov 26 12:39:46 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Nov 26 12:39:46 compute-0 ceph-mon[74966]: 3.15 scrub starts
Nov 26 12:39:46 compute-0 ceph-mon[74966]: 3.15 scrub ok
Nov 26 12:39:46 compute-0 ceph-mon[74966]: pgmap v102: 243 pgs: 2 peering, 77 unknown, 164 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 511 B/s wr, 3 op/s
Nov 26 12:39:46 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 12:39:46 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 12:39:46 compute-0 ceph-mon[74966]: osdmap e47: 3 total, 3 up, 3 in
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.16( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.13( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.2( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=1 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.14( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.17( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.d( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.c( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.8( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.a( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.4( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.5( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.6( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.7( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.18( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.1b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.1c( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=38/39 n=1 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.1d( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.1e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.1f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.19( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.1a( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.11( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.15( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.10( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.16( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.13( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.12( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=38/39 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.c( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.7( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.1d( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.0( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 44'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:46 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:47 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.5( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:47 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:47 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:47 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:47 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:47 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:47 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:47 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 48 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=38/38 les/c/f=39/39/0 sis=47) [1] r=0 lpr=47 pi=[38,47)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=36/37 n=8 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 44'63 active pruub 96.626022339s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 unknown pruub 96.626022339s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:47 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 26 12:39:47 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 26 12:39:47 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v105: 305 pgs: 2 peering, 124 unknown, 179 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 511 B/s wr, 3 op/s
Nov 26 12:39:47 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Nov 26 12:39:47 compute-0 ceph-mon[74966]: 2.11 scrub starts
Nov 26 12:39:47 compute-0 ceph-mon[74966]: 2.11 scrub ok
Nov 26 12:39:47 compute-0 ceph-mon[74966]: osdmap e48: 3 total, 3 up, 3 in
Nov 26 12:39:47 compute-0 ceph-mon[74966]: 3.8 scrub starts
Nov 26 12:39:47 compute-0 ceph-mon[74966]: 3.8 scrub ok
Nov 26 12:39:48 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Nov 26 12:39:48 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:48 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 26 12:39:48 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 26 12:39:49 compute-0 ceph-mon[74966]: pgmap v105: 305 pgs: 2 peering, 124 unknown, 179 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 511 B/s wr, 3 op/s
Nov 26 12:39:49 compute-0 ceph-mon[74966]: osdmap e49: 3 total, 3 up, 3 in
Nov 26 12:39:49 compute-0 ceph-mon[74966]: 3.16 scrub starts
Nov 26 12:39:49 compute-0 ceph-mon[74966]: 3.16 scrub ok
Nov 26 12:39:49 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 26 12:39:49 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 26 12:39:49 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v107: 305 pgs: 305 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:39:49 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 12:39:49 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 12:39:49 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 12:39:49 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 12:39:49 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 26 12:39:49 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 26 12:39:49 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 12:39:49 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 12:39:49 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 26 12:39:49 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 26 12:39:49 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 12:39:49 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 12:39:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Nov 26 12:39:50 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 12:39:50 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 12:39:50 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 26 12:39:50 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 12:39:50 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 26 12:39:50 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 12:39:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.956858635s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 102.221313477s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.957725525s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 102.222213745s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.956802368s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.221313477s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.957687378s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.222213745s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.957573891s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 102.222312927s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.957553864s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.222312927s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.957663536s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 102.222503662s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.957644463s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.222503662s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.957647324s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 102.222511292s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.957631111s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.222511292s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.957588196s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 102.222564697s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.957574844s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.222564697s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.958526611s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 102.223617554s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.958513260s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.223617554s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.958328247s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 102.223571777s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.958296776s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.223571777s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 12:39:50 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 12:39:50 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 26 12:39:50 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 12:39:50 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 26 12:39:50 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.1( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.953216553s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.911293030s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.953197479s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.911293030s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.966938019s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.925094604s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.966925621s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925094604s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.966979980s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.925231934s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.966959000s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925231934s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.952880859s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.911300659s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.952866554s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.911300659s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.966715813s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.925521851s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.966698647s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925521851s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.972962379s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.931999207s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.972946167s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.931999207s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.950360298s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909515381s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.950347900s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909515381s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.965970039s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.925201416s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.965958595s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925201416s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.950175285s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909484863s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.950165749s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909484863s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.965841293s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.925209045s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.965830803s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925209045s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.972534180s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.931983948s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.972522736s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.931983948s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.949930191s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909461975s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.949919701s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909461975s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.973234177s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932838440s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.973223686s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932838440s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.965620041s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.925300598s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.965609550s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925300598s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.965567589s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.925315857s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.965557098s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925315857s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.949582100s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909431458s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.949571609s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909431458s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.972041130s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932014465s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.972028732s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932014465s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.951236725s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.911300659s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.951225281s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.911300659s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.965163231s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.925292969s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.965151787s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925292969s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.965291977s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.925491333s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.965282440s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925491333s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.971770287s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932029724s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.971759796s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932029724s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.948816299s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909446716s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.948799133s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909446716s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.964583397s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.925338745s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.964568138s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925338745s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.971268654s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932113647s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.971258163s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932113647s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.948448181s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909423828s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.948434830s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909423828s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.964313507s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.925346375s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.964303970s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925346375s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.14( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.970659256s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932006836s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.970640182s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932006836s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.970690727s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932128906s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.970678329s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932128906s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.963773727s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.925544739s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.963752747s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925544739s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.963482857s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.925445557s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.963466644s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925445557s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.963430405s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.925460815s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.963411331s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925460815s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.970579147s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932723999s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.970562935s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932723999s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.963310242s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.925514221s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.963294983s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925514221s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.947049141s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909370422s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.963112831s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.925529480s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.963096619s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925529480s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.970295906s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932785034s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.970280647s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932785034s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.1( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.947036743s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909370422s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.946689606s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909362793s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.946668625s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909378052s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.946654320s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909378052s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962790489s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.925582886s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962779999s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925582886s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.969814301s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932762146s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.969794273s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932762146s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.946678162s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909362793s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.946285248s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909416199s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.946269989s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909416199s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.946118355s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909347534s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.946105003s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909347534s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962831497s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.926254272s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962813377s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926254272s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.969324112s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932777405s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.969307899s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932777405s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.945761681s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909332275s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.945750237s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909332275s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962580681s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.926269531s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962568283s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926269531s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.964559555s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.928268433s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.969063759s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932830811s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.969052315s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932830811s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.964543343s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.928268433s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962424278s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.926292419s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962412834s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926292419s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.968824387s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932807922s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.968811989s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932807922s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962259293s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.926315308s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962218285s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.926307678s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962239265s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926315308s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962206841s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926307678s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.968664169s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932815552s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.968652725s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932815552s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962080956s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.926338196s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962067604s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926338196s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.961989403s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.926330566s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.961977959s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926330566s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.968335152s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932861328s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.968203545s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932861328s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.961240768s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.926361084s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.961174965s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926361084s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.967593193s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.932884216s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.967578888s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.932884216s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960934639s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.926368713s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960920334s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926368713s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.943595886s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909278870s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.943580627s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909278870s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960314751s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.926383972s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960297585s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926383972s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.968079567s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.934272766s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.968064308s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934272766s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960093498s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.926414490s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960080147s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926414490s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.959667206s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.926391602s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.959650040s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926391602s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.967455864s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.934303284s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.967440605s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934303284s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.959156990s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.926422119s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.959138870s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926422119s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.966730118s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.934188843s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.966711044s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934188843s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.958840370s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.926429749s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.941487312s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909187317s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.941465378s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909187317s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.966361046s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.934234619s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.966345787s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934234619s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962287903s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.925605774s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.957566261s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925605774s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.958274841s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.926429749s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.958259583s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926429749s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.958201408s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.926445007s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.958189964s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926445007s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965899467s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.934219360s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965879440s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934219360s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965804100s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.934226990s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965792656s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934226990s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.957959175s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.926460266s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.957945824s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926460266s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965680122s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.934288025s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965665817s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934288025s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.940426826s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909156799s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.940408707s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909156799s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.940348625s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909233093s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.940299988s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909233093s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973402023s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222656250s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973369598s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222656250s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973158836s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 100.222549438s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973132133s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.222549438s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971399307s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.220855713s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972857475s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222450256s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972840309s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222450256s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972937584s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222656250s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972923279s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222656250s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972858429s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222679138s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972841263s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222679138s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972868919s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222824097s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972855568s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222824097s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973010063s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223129272s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972805977s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222976685s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972786903s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222976685s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972826004s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222915649s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972607613s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222915649s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972652435s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223014832s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.955779076s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926429749s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972763062s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223129272s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972627640s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223014832s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972572327s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223045349s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972557068s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223045349s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972480774s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223052979s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972464561s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223052979s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972475052s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 100.223068237s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972452164s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223068237s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972394943s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 100.223098755s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972373962s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223098755s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972376823s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223136902s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972361565s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223136902s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972357750s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223175049s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972344398s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223175049s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972307205s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 100.223182678s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972287178s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223182678s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972043037s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222984314s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972032547s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222984314s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971755981s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 100.223205566s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971536636s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223205566s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.17( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972020149s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223991394s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971908569s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223991394s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.970626831s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223220825s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971295357s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.220855713s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.970546722s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223220825s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.934625626s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909156799s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.934603691s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909156799s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.4( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.6( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.19( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.10( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:39:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Nov 26 12:39:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Nov 26 12:39:50 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 lc 44'56 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.17( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.e( v 49'65 lc 44'48 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.d( v 49'65 lc 44'50 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.15( v 49'65 lc 44'46 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 lc 33'18 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.14( v 49'65 lc 44'54 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:51 compute-0 ceph-mon[74966]: 2.5 scrub starts
Nov 26 12:39:51 compute-0 ceph-mon[74966]: 2.5 scrub ok
Nov 26 12:39:51 compute-0 ceph-mon[74966]: pgmap v107: 305 pgs: 305 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:39:51 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 12:39:51 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 12:39:51 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 26 12:39:51 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 12:39:51 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 26 12:39:51 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 12:39:51 compute-0 ceph-mon[74966]: osdmap e50: 3 total, 3 up, 3 in
Nov 26 12:39:51 compute-0 ceph-mon[74966]: osdmap e51: 3 total, 3 up, 3 in
Nov 26 12:39:51 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v110: 305 pgs: 305 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 107 B/s, 1 objects/s recovering
Nov 26 12:39:51 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 26 12:39:51 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 26 12:39:51 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 26 12:39:51 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 26 12:39:51 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Nov 26 12:39:51 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 26 12:39:51 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 26 12:39:51 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Nov 26 12:39:51 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Nov 26 12:39:52 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 26 12:39:52 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 26 12:39:52 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 26 12:39:52 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 26 12:39:52 compute-0 ceph-mon[74966]: osdmap e52: 3 total, 3 up, 3 in
Nov 26 12:39:52 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Nov 26 12:39:52 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.090338707s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 110.222427368s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:52 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.091548920s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 110.223670959s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:52 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.090310097s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.222427368s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:52 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.091526985s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.223670959s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:52 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.090384483s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 110.222587585s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:52 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.090363503s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.222587585s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:52 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.091360092s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 110.223678589s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:52 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.091345787s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.223678589s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:52 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:52 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:52 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:52 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:52 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:52 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:52 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:52 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:52 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:52 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:52 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:52 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:52 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:52 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:52 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:52 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:52 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:52 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:52 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:52 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:52 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Nov 26 12:39:53 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Nov 26 12:39:53 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Nov 26 12:39:53 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Nov 26 12:39:53 compute-0 ceph-mon[74966]: pgmap v110: 305 pgs: 305 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 107 B/s, 1 objects/s recovering
Nov 26 12:39:53 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.853003502s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.815834045s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:53 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852621078s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815834045s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:53 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852263451s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.815780640s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:53 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852128983s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.815803528s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:53 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852112770s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815803528s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:53 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852163315s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815780640s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:53 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=52/53 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:53 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:53 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:53 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:53 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:53 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:53 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:53 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.e( v 37'39 lc 33'17 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:53 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=52/53 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:53 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:53 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 26 12:39:53 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 26 12:39:53 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v113: 305 pgs: 305 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 222 B/s, 2 objects/s recovering
Nov 26 12:39:53 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 26 12:39:53 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 26 12:39:53 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 26 12:39:53 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 26 12:39:54 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Nov 26 12:39:54 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 26 12:39:54 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 26 12:39:54 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.915017128s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 37'39 active pruub 106.882560730s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.848460197s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.816101074s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.848379135s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816101074s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.914970398s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.882560730s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847923279s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.815940857s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847882271s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815940857s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.914413452s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 37'39 active pruub 106.882606506s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847566605s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.815841675s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847519875s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815841675s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.914331436s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.882606506s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847449303s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.815879822s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847414017s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815879822s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.918402672s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 37'39 active pruub 106.887001038s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847307205s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.815971375s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847074509s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.815902710s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846937180s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815902710s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847107887s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.816116333s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847078323s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816116333s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847105026s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815971375s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.918376923s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.887001038s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.917899132s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 37'39 active pruub 106.887329102s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.917868614s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.887329102s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846574783s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.816085815s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847175598s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.816719055s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846515656s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.816139221s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846462250s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816085815s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847414970s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.817054749s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846486092s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816139221s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847385406s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.817054749s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846344948s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.816062927s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846327782s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816062927s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846558571s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.815994263s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846675873s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816719055s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:54 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.845700264s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815994263s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:54 compute-0 ceph-mon[74966]: 2.13 scrub starts
Nov 26 12:39:54 compute-0 ceph-mon[74966]: 2.13 scrub ok
Nov 26 12:39:54 compute-0 ceph-mon[74966]: osdmap e53: 3 total, 3 up, 3 in
Nov 26 12:39:54 compute-0 ceph-mon[74966]: 3.11 scrub starts
Nov 26 12:39:54 compute-0 ceph-mon[74966]: 3.11 scrub ok
Nov 26 12:39:54 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 26 12:39:54 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:54 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:54 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 26 12:39:54 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 26 12:39:54 compute-0 sshd-session[102604]: Accepted publickey for zuul from 192.168.122.30 port 49372 ssh2: ECDSA SHA256:oYqKaXpw3UXfGsjV9kVmxjhHDxrL0kntNo/c2mjveus
Nov 26 12:39:54 compute-0 systemd-logind[777]: New session 33 of user zuul.
Nov 26 12:39:54 compute-0 systemd[1]: Started Session 33 of User zuul.
Nov 26 12:39:54 compute-0 sshd-session[102604]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:39:55 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Nov 26 12:39:55 compute-0 ceph-mon[74966]: pgmap v113: 305 pgs: 305 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 222 B/s, 2 objects/s recovering
Nov 26 12:39:55 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 26 12:39:55 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 26 12:39:55 compute-0 ceph-mon[74966]: osdmap e54: 3 total, 3 up, 3 in
Nov 26 12:39:55 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Nov 26 12:39:55 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Nov 26 12:39:55 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:55 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:55 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:55 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:55 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:55 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:55 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:55 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:55 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:55 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:55 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:55 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:55 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.7( v 37'39 lc 33'18 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:55 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:55 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:55 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:55 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:55 compute-0 python3.9[102757]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:39:55 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v116: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 130 B/s, 1 objects/s recovering
Nov 26 12:39:55 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 26 12:39:55 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 26 12:39:55 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 26 12:39:55 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 26 12:39:55 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:39:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Nov 26 12:39:56 compute-0 ceph-mon[74966]: 2.6 scrub starts
Nov 26 12:39:56 compute-0 ceph-mon[74966]: 2.6 scrub ok
Nov 26 12:39:56 compute-0 ceph-mon[74966]: osdmap e55: 3 total, 3 up, 3 in
Nov 26 12:39:56 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 26 12:39:56 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 26 12:39:56 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 26 12:39:56 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 26 12:39:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Nov 26 12:39:56 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Nov 26 12:39:56 compute-0 sudo[102973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keoaaeoyejyjadyrnlywihtbwwsqjabe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160796.364306-32-174588754340139/AnsiballZ_command.py'
Nov 26 12:39:56 compute-0 sudo[102973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:39:56 compute-0 python3.9[102975]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                             pushd /var/tmp
                                             curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                             pushd repo-setup-main
                                             python3 -m venv ./venv
                                             PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                             ./venv/bin/repo-setup current-podified -b antelope
                                             popd
                                             rm -rf repo-setup-main
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:39:56 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Nov 26 12:39:56 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Nov 26 12:39:57 compute-0 ceph-mon[74966]: pgmap v116: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 130 B/s, 1 objects/s recovering
Nov 26 12:39:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 26 12:39:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 26 12:39:57 compute-0 ceph-mon[74966]: osdmap e56: 3 total, 3 up, 3 in
Nov 26 12:39:57 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=9.764075279s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 110.222412109s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:57 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=9.763993263s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.222412109s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:57 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=9.764774323s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 110.223670959s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:57 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=9.764751434s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.223670959s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:57 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:57 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:57 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v118: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 835 B/s, 4 keys/s, 22 objects/s recovering
Nov 26 12:39:57 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 26 12:39:57 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 26 12:39:57 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 26 12:39:57 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 26 12:39:57 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 26 12:39:57 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 26 12:39:58 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Nov 26 12:39:58 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 26 12:39:58 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 26 12:39:58 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Nov 26 12:39:58 compute-0 ceph-mon[74966]: 5.14 scrub starts
Nov 26 12:39:58 compute-0 ceph-mon[74966]: 5.14 scrub ok
Nov 26 12:39:58 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 26 12:39:58 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 26 12:39:58 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Nov 26 12:39:58 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=8.884973526s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 37'39 active pruub 106.880828857s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:58 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=8.884924889s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.880828857s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:58 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=8.884659767s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 37'39 active pruub 106.880821228s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:39:58 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=8.884625435s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.880821228s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:39:58 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.c( v 37'39 lc 33'16 (0'0,37'39] local-lis/les=56/57 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:58 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:58 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:39:58 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.4( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=56/57 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:58 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 26 12:39:58 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 26 12:39:59 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Nov 26 12:39:59 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Nov 26 12:39:59 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Nov 26 12:39:59 compute-0 ceph-mon[74966]: pgmap v118: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 835 B/s, 4 keys/s, 22 objects/s recovering
Nov 26 12:39:59 compute-0 ceph-mon[74966]: 5.15 scrub starts
Nov 26 12:39:59 compute-0 ceph-mon[74966]: 5.15 scrub ok
Nov 26 12:39:59 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 26 12:39:59 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 26 12:39:59 compute-0 ceph-mon[74966]: osdmap e57: 3 total, 3 up, 3 in
Nov 26 12:39:59 compute-0 ceph-mon[74966]: 3.e scrub starts
Nov 26 12:39:59 compute-0 ceph-mon[74966]: 3.e scrub ok
Nov 26 12:39:59 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:59 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.5( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=57/58 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:39:59 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 26 12:39:59 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 26 12:39:59 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Nov 26 12:39:59 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Nov 26 12:39:59 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v121: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s, 4 keys/s, 21 objects/s recovering
Nov 26 12:39:59 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 26 12:39:59 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 26 12:39:59 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 26 12:39:59 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 26 12:40:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Nov 26 12:40:00 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 26 12:40:00 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 26 12:40:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Nov 26 12:40:00 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Nov 26 12:40:00 compute-0 ceph-mon[74966]: osdmap e58: 3 total, 3 up, 3 in
Nov 26 12:40:00 compute-0 ceph-mon[74966]: 3.18 scrub starts
Nov 26 12:40:00 compute-0 ceph-mon[74966]: 3.18 scrub ok
Nov 26 12:40:00 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 26 12:40:00 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 26 12:40:00 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.607535362s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.925239563s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:00 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.607491493s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.925239563s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:00 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:00 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.607428551s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.926414490s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:00 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.607377052s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.926414490s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:00 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:00 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.613403320s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.932228088s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:00 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.605746269s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.925514221s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:00 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.612406731s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.932228088s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:00 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.605547905s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.925514221s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:00 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:00 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:00 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1b deep-scrub starts
Nov 26 12:40:00 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1b deep-scrub ok
Nov 26 12:40:00 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Nov 26 12:40:00 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Nov 26 12:40:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:40:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Nov 26 12:40:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Nov 26 12:40:00 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Nov 26 12:40:00 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:00 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:00 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:00 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:00 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:00 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:00 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:00 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:00 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:00 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:00 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:00 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:00 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:00 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:00 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:00 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:01 compute-0 ceph-mon[74966]: 2.16 scrub starts
Nov 26 12:40:01 compute-0 ceph-mon[74966]: 2.16 scrub ok
Nov 26 12:40:01 compute-0 ceph-mon[74966]: pgmap v121: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s, 4 keys/s, 21 objects/s recovering
Nov 26 12:40:01 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 26 12:40:01 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 26 12:40:01 compute-0 ceph-mon[74966]: osdmap e59: 3 total, 3 up, 3 in
Nov 26 12:40:01 compute-0 ceph-mon[74966]: 4.1b deep-scrub starts
Nov 26 12:40:01 compute-0 ceph-mon[74966]: 4.1b deep-scrub ok
Nov 26 12:40:01 compute-0 ceph-mon[74966]: osdmap e60: 3 total, 3 up, 3 in
Nov 26 12:40:01 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 26 12:40:01 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 26 12:40:01 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Nov 26 12:40:01 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Nov 26 12:40:01 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v124: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 430 B/s, 2 objects/s recovering
Nov 26 12:40:01 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 26 12:40:01 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 26 12:40:01 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 26 12:40:01 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 26 12:40:01 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Nov 26 12:40:01 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 26 12:40:01 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 26 12:40:01 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Nov 26 12:40:01 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Nov 26 12:40:02 compute-0 ceph-mon[74966]: 2.8 scrub starts
Nov 26 12:40:02 compute-0 ceph-mon[74966]: 2.8 scrub ok
Nov 26 12:40:02 compute-0 ceph-mon[74966]: 4.1c scrub starts
Nov 26 12:40:02 compute-0 ceph-mon[74966]: 4.1c scrub ok
Nov 26 12:40:02 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 26 12:40:02 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 26 12:40:02 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 26 12:40:02 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 26 12:40:02 compute-0 ceph-mon[74966]: osdmap e61: 3 total, 3 up, 3 in
Nov 26 12:40:02 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:02 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:02 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:02 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:02 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.2 deep-scrub starts
Nov 26 12:40:02 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.2 deep-scrub ok
Nov 26 12:40:02 compute-0 sudo[102973]: pam_unix(sudo:session): session closed for user root
Nov 26 12:40:03 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Nov 26 12:40:03 compute-0 ceph-mon[74966]: 5.1 scrub starts
Nov 26 12:40:03 compute-0 ceph-mon[74966]: 5.1 scrub ok
Nov 26 12:40:03 compute-0 ceph-mon[74966]: pgmap v124: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 430 B/s, 2 objects/s recovering
Nov 26 12:40:03 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Nov 26 12:40:03 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Nov 26 12:40:03 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:03 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:03 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:03 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:03 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:03 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:03 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:03 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:03 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.415146828s) [2] async=[2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 118.441520691s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:03 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.415066719s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.441520691s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:03 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.416400909s) [2] async=[2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 118.443046570s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:03 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.416225433s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.443046570s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:03 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.416756630s) [2] async=[2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 118.443283081s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:03 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.416030884s) [2] async=[2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 118.443237305s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:03 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.415893555s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.443283081s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:03 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.415829659s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.443237305s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:03 compute-0 sshd-session[102607]: Connection closed by 192.168.122.30 port 49372
Nov 26 12:40:03 compute-0 sshd-session[102604]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:40:03 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Nov 26 12:40:03 compute-0 systemd[1]: session-33.scope: Consumed 6.529s CPU time.
Nov 26 12:40:03 compute-0 systemd-logind[777]: Session 33 logged out. Waiting for processes to exit.
Nov 26 12:40:03 compute-0 systemd-logind[777]: Removed session 33.
Nov 26 12:40:03 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.765221596s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 active pruub 122.301292419s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:03 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.765181541s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 122.301292419s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:03 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.767482758s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 active pruub 122.303794861s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:03 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.767401695s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 122.303794861s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:03 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=14.757040977s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=44'389 mlcod 0'0 active pruub 121.293930054s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:03 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.766945839s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 active pruub 122.303878784s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:03 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.766919136s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 122.303878784s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:03 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=14.756837845s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 121.293930054s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:03 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:03 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:03 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:03 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:03 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v127: 305 pgs: 4 unknown, 4 peering, 297 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 26 12:40:04 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Nov 26 12:40:04 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Nov 26 12:40:04 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Nov 26 12:40:04 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:04 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:04 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:04 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:04 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:04 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:04 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:04 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:04 compute-0 ceph-mon[74966]: 5.2 deep-scrub starts
Nov 26 12:40:04 compute-0 ceph-mon[74966]: 5.2 deep-scrub ok
Nov 26 12:40:04 compute-0 ceph-mon[74966]: osdmap e62: 3 total, 3 up, 3 in
Nov 26 12:40:04 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:04 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=62/63 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:04 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:04 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:04 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:04 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:04 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:04 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:04 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:04 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:04 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:04 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:04 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Nov 26 12:40:04 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Nov 26 12:40:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Nov 26 12:40:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Nov 26 12:40:05 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Nov 26 12:40:05 compute-0 ceph-mon[74966]: pgmap v127: 305 pgs: 4 unknown, 4 peering, 297 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 26 12:40:05 compute-0 ceph-mon[74966]: osdmap e63: 3 total, 3 up, 3 in
Nov 26 12:40:05 compute-0 ceph-mon[74966]: 4.1 scrub starts
Nov 26 12:40:05 compute-0 ceph-mon[74966]: 4.1 scrub ok
Nov 26 12:40:05 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:05 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:05 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:05 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:05 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 26 12:40:05 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 26 12:40:05 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v130: 305 pgs: 4 unknown, 4 peering, 297 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:40:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:40:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:40:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:40:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:40:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:40:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:40:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:40:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Nov 26 12:40:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Nov 26 12:40:05 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Nov 26 12:40:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:05 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65 pruub=15.676793098s) [2] async=[2] r=-1 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 44'389 active pruub 124.874816895s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:05 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.676989555s) [2] async=[2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 44'389 active pruub 124.874961853s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:05 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.678121567s) [2] async=[2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 44'389 active pruub 124.876121521s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:05 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65 pruub=15.676681519s) [2] r=-1 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.874816895s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:05 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.677809715s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.876121521s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:05 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.676671028s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.874961853s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:05 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.676982880s) [2] async=[2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 44'389 active pruub 124.874923706s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:05 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.676142693s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.874923706s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:06 compute-0 ceph-mon[74966]: osdmap e64: 3 total, 3 up, 3 in
Nov 26 12:40:06 compute-0 ceph-mon[74966]: osdmap e65: 3 total, 3 up, 3 in
Nov 26 12:40:06 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 26 12:40:06 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 26 12:40:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Nov 26 12:40:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Nov 26 12:40:06 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Nov 26 12:40:06 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:06 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=65/66 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:06 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:06 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:07 compute-0 ceph-mon[74966]: 2.b scrub starts
Nov 26 12:40:07 compute-0 ceph-mon[74966]: 2.b scrub ok
Nov 26 12:40:07 compute-0 ceph-mon[74966]: pgmap v130: 305 pgs: 4 unknown, 4 peering, 297 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:40:07 compute-0 ceph-mon[74966]: 4.a scrub starts
Nov 26 12:40:07 compute-0 ceph-mon[74966]: 4.a scrub ok
Nov 26 12:40:07 compute-0 ceph-mon[74966]: osdmap e66: 3 total, 3 up, 3 in
Nov 26 12:40:07 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 26 12:40:07 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 26 12:40:07 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v133: 305 pgs: 4 unknown, 4 peering, 297 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:40:08 compute-0 ceph-mon[74966]: 2.9 scrub starts
Nov 26 12:40:08 compute-0 ceph-mon[74966]: 2.9 scrub ok
Nov 26 12:40:08 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Nov 26 12:40:08 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Nov 26 12:40:09 compute-0 ceph-mon[74966]: pgmap v133: 305 pgs: 4 unknown, 4 peering, 297 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:40:09 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v134: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 708 B/s wr, 23 op/s; 190 B/s, 6 objects/s recovering
Nov 26 12:40:09 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 26 12:40:09 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 26 12:40:09 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 26 12:40:09 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 26 12:40:10 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Nov 26 12:40:10 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 26 12:40:10 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 26 12:40:10 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Nov 26 12:40:10 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Nov 26 12:40:10 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67 pruub=14.871047974s) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 124.926139832s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:10 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67 pruub=14.870986938s) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 124.926139832s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:10 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67 pruub=14.873212814s) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 124.928573608s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:10 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67 pruub=14.873172760s) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 124.928573608s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:10 compute-0 ceph-mon[74966]: 2.1f scrub starts
Nov 26 12:40:10 compute-0 ceph-mon[74966]: 2.1f scrub ok
Nov 26 12:40:10 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 26 12:40:10 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 26 12:40:10 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:10 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:10 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 67 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67 pruub=12.363718033s) [2] r=-1 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 126.222518921s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:10 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 67 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67 pruub=12.363677025s) [2] r=-1 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.222518921s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:10 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:10 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:40:10 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Nov 26 12:40:10 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Nov 26 12:40:10 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Nov 26 12:40:10 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:10 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:10 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:10 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:10 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:10 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:10 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:10 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:10 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=67/68 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:11 compute-0 ceph-mon[74966]: pgmap v134: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 708 B/s wr, 23 op/s; 190 B/s, 6 objects/s recovering
Nov 26 12:40:11 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 26 12:40:11 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 26 12:40:11 compute-0 ceph-mon[74966]: osdmap e67: 3 total, 3 up, 3 in
Nov 26 12:40:11 compute-0 ceph-mon[74966]: osdmap e68: 3 total, 3 up, 3 in
Nov 26 12:40:11 compute-0 sudo[103032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:40:11 compute-0 sudo[103032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:40:11 compute-0 sudo[103032]: pam_unix(sudo:session): session closed for user root
Nov 26 12:40:11 compute-0 sudo[103057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:40:11 compute-0 sudo[103057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:40:11 compute-0 sudo[103057]: pam_unix(sudo:session): session closed for user root
Nov 26 12:40:11 compute-0 sudo[103082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:40:11 compute-0 sudo[103082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:40:11 compute-0 sudo[103082]: pam_unix(sudo:session): session closed for user root
Nov 26 12:40:11 compute-0 sudo[103107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 12:40:11 compute-0 sudo[103107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:40:11 compute-0 sudo[103107]: pam_unix(sudo:session): session closed for user root
Nov 26 12:40:11 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:40:11 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:40:11 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 12:40:11 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:40:11 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 12:40:11 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:40:11 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev bb324a4e-04ca-4ac3-bfe8-f63a00d6650c does not exist
Nov 26 12:40:11 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 61eebe65-bda6-43c3-87e1-5566fee4934e does not exist
Nov 26 12:40:11 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 76985d16-1c68-4761-9785-8967082678c9 does not exist
Nov 26 12:40:11 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 12:40:11 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:40:11 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 12:40:11 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:40:11 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:40:11 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:40:11 compute-0 sudo[103162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:40:11 compute-0 sudo[103162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:40:11 compute-0 sudo[103162]: pam_unix(sudo:session): session closed for user root
Nov 26 12:40:11 compute-0 sudo[103187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:40:11 compute-0 sudo[103187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:40:11 compute-0 sudo[103187]: pam_unix(sudo:session): session closed for user root
Nov 26 12:40:11 compute-0 sudo[103212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:40:11 compute-0 sudo[103212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:40:11 compute-0 sudo[103212]: pam_unix(sudo:session): session closed for user root
Nov 26 12:40:11 compute-0 sudo[103237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 12:40:11 compute-0 sudo[103237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:40:11 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v137: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 691 B/s wr, 23 op/s; 185 B/s, 6 objects/s recovering
Nov 26 12:40:11 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 26 12:40:11 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 26 12:40:11 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 26 12:40:11 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 26 12:40:11 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Nov 26 12:40:11 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 26 12:40:11 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 26 12:40:11 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Nov 26 12:40:11 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Nov 26 12:40:11 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69 pruub=11.003540039s) [0] r=-1 lpr=69 pi=[50,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 122.887435913s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:11 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69 pruub=11.003467560s) [0] r=-1 lpr=69 pi=[50,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 122.887435913s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:11 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 69 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:12 compute-0 podman[103292]: 2025-11-26 12:40:12.075956682 +0000 UTC m=+0.027584550 container create 19be1902a608eedb3cd018a7d14fef4e8b047fb683f40f55dc182b0678a78cd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:40:12 compute-0 systemd[1]: Started libpod-conmon-19be1902a608eedb3cd018a7d14fef4e8b047fb683f40f55dc182b0678a78cd5.scope.
Nov 26 12:40:12 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:40:12 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:40:12 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:40:12 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:40:12 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:40:12 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:40:12 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:40:12 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 26 12:40:12 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 26 12:40:12 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 26 12:40:12 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 26 12:40:12 compute-0 ceph-mon[74966]: osdmap e69: 3 total, 3 up, 3 in
Nov 26 12:40:12 compute-0 podman[103292]: 2025-11-26 12:40:12.128329441 +0000 UTC m=+0.079957310 container init 19be1902a608eedb3cd018a7d14fef4e8b047fb683f40f55dc182b0678a78cd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mccarthy, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:40:12 compute-0 podman[103292]: 2025-11-26 12:40:12.132942243 +0000 UTC m=+0.084570110 container start 19be1902a608eedb3cd018a7d14fef4e8b047fb683f40f55dc182b0678a78cd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mccarthy, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:40:12 compute-0 podman[103292]: 2025-11-26 12:40:12.133967385 +0000 UTC m=+0.085595253 container attach 19be1902a608eedb3cd018a7d14fef4e8b047fb683f40f55dc182b0678a78cd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 26 12:40:12 compute-0 brave_mccarthy[103305]: 167 167
Nov 26 12:40:12 compute-0 systemd[1]: libpod-19be1902a608eedb3cd018a7d14fef4e8b047fb683f40f55dc182b0678a78cd5.scope: Deactivated successfully.
Nov 26 12:40:12 compute-0 podman[103292]: 2025-11-26 12:40:12.137465656 +0000 UTC m=+0.089093525 container died 19be1902a608eedb3cd018a7d14fef4e8b047fb683f40f55dc182b0678a78cd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:40:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-5736097c2d694b8708a2d28f42edbe7700ffbd3d7455716d260e3d723feef7f1-merged.mount: Deactivated successfully.
Nov 26 12:40:12 compute-0 podman[103292]: 2025-11-26 12:40:12.154272397 +0000 UTC m=+0.105900264 container remove 19be1902a608eedb3cd018a7d14fef4e8b047fb683f40f55dc182b0678a78cd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mccarthy, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:40:12 compute-0 podman[103292]: 2025-11-26 12:40:12.06412156 +0000 UTC m=+0.015749428 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:40:12 compute-0 systemd[1]: libpod-conmon-19be1902a608eedb3cd018a7d14fef4e8b047fb683f40f55dc182b0678a78cd5.scope: Deactivated successfully.
Nov 26 12:40:12 compute-0 podman[103326]: 2025-11-26 12:40:12.268436864 +0000 UTC m=+0.029800667 container create da7289a9b28b4496a2c89e9f5eef9ad9cb04bd2d11fee6e67007d6f955c7e0ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_wilson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:40:12 compute-0 systemd[1]: Started libpod-conmon-da7289a9b28b4496a2c89e9f5eef9ad9cb04bd2d11fee6e67007d6f955c7e0ea.scope.
Nov 26 12:40:12 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/501435377abaece7c92bec312f4e3b34dcad36eea7783a720c22af11db9282c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/501435377abaece7c92bec312f4e3b34dcad36eea7783a720c22af11db9282c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/501435377abaece7c92bec312f4e3b34dcad36eea7783a720c22af11db9282c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/501435377abaece7c92bec312f4e3b34dcad36eea7783a720c22af11db9282c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/501435377abaece7c92bec312f4e3b34dcad36eea7783a720c22af11db9282c6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:40:12 compute-0 podman[103326]: 2025-11-26 12:40:12.333704783 +0000 UTC m=+0.095068585 container init da7289a9b28b4496a2c89e9f5eef9ad9cb04bd2d11fee6e67007d6f955c7e0ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:40:12 compute-0 podman[103326]: 2025-11-26 12:40:12.337831158 +0000 UTC m=+0.099194961 container start da7289a9b28b4496a2c89e9f5eef9ad9cb04bd2d11fee6e67007d6f955c7e0ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_wilson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:40:12 compute-0 podman[103326]: 2025-11-26 12:40:12.340309519 +0000 UTC m=+0.101673341 container attach da7289a9b28b4496a2c89e9f5eef9ad9cb04bd2d11fee6e67007d6f955c7e0ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 26 12:40:12 compute-0 podman[103326]: 2025-11-26 12:40:12.256736326 +0000 UTC m=+0.018100128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:40:12 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 26 12:40:12 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 26 12:40:12 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 26 12:40:12 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:12 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:12 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 26 12:40:12 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Nov 26 12:40:12 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Nov 26 12:40:12 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 70 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=69/70 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:12 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Nov 26 12:40:12 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70 pruub=15.671307564s) [2] async=[2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 128.558670044s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:12 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70 pruub=15.669959068s) [2] async=[2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 128.557617188s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:12 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70 pruub=15.669912338s) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.557617188s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:12 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70 pruub=15.670838356s) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.558670044s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:12 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:12 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:12 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:12 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:13 compute-0 ceph-mon[74966]: pgmap v137: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 691 B/s wr, 23 op/s; 185 B/s, 6 objects/s recovering
Nov 26 12:40:13 compute-0 ceph-mon[74966]: 5.f scrub starts
Nov 26 12:40:13 compute-0 ceph-mon[74966]: 5.f scrub ok
Nov 26 12:40:13 compute-0 ceph-mon[74966]: 4.e scrub starts
Nov 26 12:40:13 compute-0 ceph-mon[74966]: 4.e scrub ok
Nov 26 12:40:13 compute-0 ceph-mon[74966]: osdmap e70: 3 total, 3 up, 3 in
Nov 26 12:40:13 compute-0 elastic_wilson[103339]: --> passed data devices: 0 physical, 3 LVM
Nov 26 12:40:13 compute-0 elastic_wilson[103339]: --> relative data size: 1.0
Nov 26 12:40:13 compute-0 elastic_wilson[103339]: --> All data devices are unavailable
Nov 26 12:40:13 compute-0 systemd[1]: libpod-da7289a9b28b4496a2c89e9f5eef9ad9cb04bd2d11fee6e67007d6f955c7e0ea.scope: Deactivated successfully.
Nov 26 12:40:13 compute-0 podman[103326]: 2025-11-26 12:40:13.15915303 +0000 UTC m=+0.920516832 container died da7289a9b28b4496a2c89e9f5eef9ad9cb04bd2d11fee6e67007d6f955c7e0ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_wilson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:40:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-501435377abaece7c92bec312f4e3b34dcad36eea7783a720c22af11db9282c6-merged.mount: Deactivated successfully.
Nov 26 12:40:13 compute-0 podman[103326]: 2025-11-26 12:40:13.19056379 +0000 UTC m=+0.951927591 container remove da7289a9b28b4496a2c89e9f5eef9ad9cb04bd2d11fee6e67007d6f955c7e0ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_wilson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:40:13 compute-0 systemd[1]: libpod-conmon-da7289a9b28b4496a2c89e9f5eef9ad9cb04bd2d11fee6e67007d6f955c7e0ea.scope: Deactivated successfully.
Nov 26 12:40:13 compute-0 sudo[103237]: pam_unix(sudo:session): session closed for user root
Nov 26 12:40:13 compute-0 sudo[103378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:40:13 compute-0 sudo[103378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:40:13 compute-0 sudo[103378]: pam_unix(sudo:session): session closed for user root
Nov 26 12:40:13 compute-0 sudo[103403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:40:13 compute-0 sudo[103403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:40:13 compute-0 sudo[103403]: pam_unix(sudo:session): session closed for user root
Nov 26 12:40:13 compute-0 sudo[103428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:40:13 compute-0 sudo[103428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:40:13 compute-0 sudo[103428]: pam_unix(sudo:session): session closed for user root
Nov 26 12:40:13 compute-0 sudo[103453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- lvm list --format json
Nov 26 12:40:13 compute-0 sudo[103453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:40:13 compute-0 podman[103509]: 2025-11-26 12:40:13.612900085 +0000 UTC m=+0.028164353 container create 79aa115b8c529c50df7c9990ac88f801505d44b068af189c4b56e37bf8893471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_montalcini, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:40:13 compute-0 systemd[1]: Started libpod-conmon-79aa115b8c529c50df7c9990ac88f801505d44b068af189c4b56e37bf8893471.scope.
Nov 26 12:40:13 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:40:13 compute-0 podman[103509]: 2025-11-26 12:40:13.658733032 +0000 UTC m=+0.073997319 container init 79aa115b8c529c50df7c9990ac88f801505d44b068af189c4b56e37bf8893471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:40:13 compute-0 podman[103509]: 2025-11-26 12:40:13.663828003 +0000 UTC m=+0.079092270 container start 79aa115b8c529c50df7c9990ac88f801505d44b068af189c4b56e37bf8893471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:40:13 compute-0 podman[103509]: 2025-11-26 12:40:13.665022433 +0000 UTC m=+0.080286730 container attach 79aa115b8c529c50df7c9990ac88f801505d44b068af189c4b56e37bf8893471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_montalcini, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:40:13 compute-0 hopeful_montalcini[103523]: 167 167
Nov 26 12:40:13 compute-0 systemd[1]: libpod-79aa115b8c529c50df7c9990ac88f801505d44b068af189c4b56e37bf8893471.scope: Deactivated successfully.
Nov 26 12:40:13 compute-0 podman[103509]: 2025-11-26 12:40:13.666731033 +0000 UTC m=+0.081995300 container died 79aa115b8c529c50df7c9990ac88f801505d44b068af189c4b56e37bf8893471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_montalcini, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:40:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c99d2e1c82d6119e7274f186ef7a82b01d388488764159b8e8aa688894bd237-merged.mount: Deactivated successfully.
Nov 26 12:40:13 compute-0 podman[103509]: 2025-11-26 12:40:13.684519463 +0000 UTC m=+0.099783730 container remove 79aa115b8c529c50df7c9990ac88f801505d44b068af189c4b56e37bf8893471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_montalcini, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:40:13 compute-0 podman[103509]: 2025-11-26 12:40:13.601787796 +0000 UTC m=+0.017052083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:40:13 compute-0 systemd[1]: libpod-conmon-79aa115b8c529c50df7c9990ac88f801505d44b068af189c4b56e37bf8893471.scope: Deactivated successfully.
Nov 26 12:40:13 compute-0 podman[103545]: 2025-11-26 12:40:13.797813711 +0000 UTC m=+0.028601157 container create 46ba957521a473ce1f43b995071c2454ec0d51b792b85ebfd2203aa1532a9be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 12:40:13 compute-0 systemd[1]: Started libpod-conmon-46ba957521a473ce1f43b995071c2454ec0d51b792b85ebfd2203aa1532a9be1.scope.
Nov 26 12:40:13 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:40:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66685294b122e5575da20a2082c3357e7851b4f3a43ea9830e75d5ff1d554329/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:40:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66685294b122e5575da20a2082c3357e7851b4f3a43ea9830e75d5ff1d554329/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:40:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66685294b122e5575da20a2082c3357e7851b4f3a43ea9830e75d5ff1d554329/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:40:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66685294b122e5575da20a2082c3357e7851b4f3a43ea9830e75d5ff1d554329/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:40:13 compute-0 podman[103545]: 2025-11-26 12:40:13.849051231 +0000 UTC m=+0.079838667 container init 46ba957521a473ce1f43b995071c2454ec0d51b792b85ebfd2203aa1532a9be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kapitsa, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 12:40:13 compute-0 podman[103545]: 2025-11-26 12:40:13.853729157 +0000 UTC m=+0.084516592 container start 46ba957521a473ce1f43b995071c2454ec0d51b792b85ebfd2203aa1532a9be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 12:40:13 compute-0 podman[103545]: 2025-11-26 12:40:13.85519624 +0000 UTC m=+0.085983676 container attach 46ba957521a473ce1f43b995071c2454ec0d51b792b85ebfd2203aa1532a9be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kapitsa, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 26 12:40:13 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v140: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 2 objects/s recovering
Nov 26 12:40:13 compute-0 podman[103545]: 2025-11-26 12:40:13.786205176 +0000 UTC m=+0.016992612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:40:13 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Nov 26 12:40:13 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Nov 26 12:40:13 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Nov 26 12:40:13 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=70/71 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:13 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=70/71 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]: {
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:     "0": [
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:         {
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "devices": [
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "/dev/loop3"
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             ],
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "lv_name": "ceph_lv0",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "lv_size": "21470642176",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "name": "ceph_lv0",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "tags": {
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.cluster_name": "ceph",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.crush_device_class": "",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.encrypted": "0",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.osd_id": "0",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.type": "block",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.vdo": "0"
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             },
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "type": "block",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "vg_name": "ceph_vg0"
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:         }
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:     ],
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:     "1": [
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:         {
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "devices": [
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "/dev/loop4"
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             ],
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "lv_name": "ceph_lv1",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "lv_size": "21470642176",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "name": "ceph_lv1",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "tags": {
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.cluster_name": "ceph",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.crush_device_class": "",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.encrypted": "0",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.osd_id": "1",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.type": "block",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.vdo": "0"
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             },
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "type": "block",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "vg_name": "ceph_vg1"
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:         }
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:     ],
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:     "2": [
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:         {
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "devices": [
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "/dev/loop5"
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             ],
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "lv_name": "ceph_lv2",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "lv_size": "21470642176",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "name": "ceph_lv2",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "tags": {
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.cluster_name": "ceph",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.crush_device_class": "",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.encrypted": "0",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.osd_id": "2",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.type": "block",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:                 "ceph.vdo": "0"
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             },
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "type": "block",
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:             "vg_name": "ceph_vg2"
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:         }
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]:     ]
Nov 26 12:40:14 compute-0 nifty_kapitsa[103558]: }
Nov 26 12:40:14 compute-0 systemd[1]: libpod-46ba957521a473ce1f43b995071c2454ec0d51b792b85ebfd2203aa1532a9be1.scope: Deactivated successfully.
Nov 26 12:40:14 compute-0 conmon[103558]: conmon 46ba957521a473ce1f43 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-46ba957521a473ce1f43b995071c2454ec0d51b792b85ebfd2203aa1532a9be1.scope/container/memory.events
Nov 26 12:40:14 compute-0 podman[103567]: 2025-11-26 12:40:14.527795874 +0000 UTC m=+0.019008851 container died 46ba957521a473ce1f43b995071c2454ec0d51b792b85ebfd2203aa1532a9be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:40:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-66685294b122e5575da20a2082c3357e7851b4f3a43ea9830e75d5ff1d554329-merged.mount: Deactivated successfully.
Nov 26 12:40:14 compute-0 podman[103567]: 2025-11-26 12:40:14.56061605 +0000 UTC m=+0.051829026 container remove 46ba957521a473ce1f43b995071c2454ec0d51b792b85ebfd2203aa1532a9be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:40:14 compute-0 systemd[1]: libpod-conmon-46ba957521a473ce1f43b995071c2454ec0d51b792b85ebfd2203aa1532a9be1.scope: Deactivated successfully.
Nov 26 12:40:14 compute-0 sudo[103453]: pam_unix(sudo:session): session closed for user root
Nov 26 12:40:14 compute-0 sudo[103578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:40:14 compute-0 sudo[103578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:40:14 compute-0 sudo[103578]: pam_unix(sudo:session): session closed for user root
Nov 26 12:40:14 compute-0 sudo[103603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:40:14 compute-0 sudo[103603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:40:14 compute-0 sudo[103603]: pam_unix(sudo:session): session closed for user root
Nov 26 12:40:14 compute-0 sudo[103628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:40:14 compute-0 sudo[103628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:40:14 compute-0 sudo[103628]: pam_unix(sudo:session): session closed for user root
Nov 26 12:40:14 compute-0 sudo[103653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- raw list --format json
Nov 26 12:40:14 compute-0 sudo[103653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:40:14 compute-0 ceph-mon[74966]: pgmap v140: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 2 objects/s recovering
Nov 26 12:40:14 compute-0 ceph-mon[74966]: osdmap e71: 3 total, 3 up, 3 in
Nov 26 12:40:14 compute-0 podman[103709]: 2025-11-26 12:40:14.976801265 +0000 UTC m=+0.026638889 container create 4d8f5c9d06df3171fa71768ba8880d2c2a9510e320b618eab330aa14a8247cb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wilbur, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:40:15 compute-0 systemd[1]: Started libpod-conmon-4d8f5c9d06df3171fa71768ba8880d2c2a9510e320b618eab330aa14a8247cb5.scope.
Nov 26 12:40:15 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:40:15 compute-0 podman[103709]: 2025-11-26 12:40:15.024388346 +0000 UTC m=+0.074225980 container init 4d8f5c9d06df3171fa71768ba8880d2c2a9510e320b618eab330aa14a8247cb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wilbur, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 12:40:15 compute-0 podman[103709]: 2025-11-26 12:40:15.029401844 +0000 UTC m=+0.079239458 container start 4d8f5c9d06df3171fa71768ba8880d2c2a9510e320b618eab330aa14a8247cb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wilbur, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 12:40:15 compute-0 podman[103709]: 2025-11-26 12:40:15.030567851 +0000 UTC m=+0.080405485 container attach 4d8f5c9d06df3171fa71768ba8880d2c2a9510e320b618eab330aa14a8247cb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wilbur, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:40:15 compute-0 magical_wilbur[103722]: 167 167
Nov 26 12:40:15 compute-0 systemd[1]: libpod-4d8f5c9d06df3171fa71768ba8880d2c2a9510e320b618eab330aa14a8247cb5.scope: Deactivated successfully.
Nov 26 12:40:15 compute-0 podman[103709]: 2025-11-26 12:40:15.03309347 +0000 UTC m=+0.082931104 container died 4d8f5c9d06df3171fa71768ba8880d2c2a9510e320b618eab330aa14a8247cb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wilbur, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:40:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-28011ddee321da2f9ba04817238e3ff4daf1c1aeeb8d3d38c897f5ef0262d66d-merged.mount: Deactivated successfully.
Nov 26 12:40:15 compute-0 podman[103709]: 2025-11-26 12:40:15.05436296 +0000 UTC m=+0.104200574 container remove 4d8f5c9d06df3171fa71768ba8880d2c2a9510e320b618eab330aa14a8247cb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wilbur, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 12:40:15 compute-0 podman[103709]: 2025-11-26 12:40:14.966436033 +0000 UTC m=+0.016273667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:40:15 compute-0 systemd[1]: libpod-conmon-4d8f5c9d06df3171fa71768ba8880d2c2a9510e320b618eab330aa14a8247cb5.scope: Deactivated successfully.
Nov 26 12:40:15 compute-0 podman[103743]: 2025-11-26 12:40:15.170324844 +0000 UTC m=+0.028638376 container create d021722196b6bf946e8f5ae557df9262a1bb2e7809dc96b8d8921fb448447515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:40:15 compute-0 systemd[1]: Started libpod-conmon-d021722196b6bf946e8f5ae557df9262a1bb2e7809dc96b8d8921fb448447515.scope.
Nov 26 12:40:15 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:40:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80c703cd0701cff6d820cbb217a232cf7baf076ab2c233fe8b65d957325790e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:40:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80c703cd0701cff6d820cbb217a232cf7baf076ab2c233fe8b65d957325790e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:40:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80c703cd0701cff6d820cbb217a232cf7baf076ab2c233fe8b65d957325790e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:40:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80c703cd0701cff6d820cbb217a232cf7baf076ab2c233fe8b65d957325790e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:40:15 compute-0 podman[103743]: 2025-11-26 12:40:15.230000425 +0000 UTC m=+0.088313967 container init d021722196b6bf946e8f5ae557df9262a1bb2e7809dc96b8d8921fb448447515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hopper, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:40:15 compute-0 podman[103743]: 2025-11-26 12:40:15.235439815 +0000 UTC m=+0.093753347 container start d021722196b6bf946e8f5ae557df9262a1bb2e7809dc96b8d8921fb448447515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:40:15 compute-0 podman[103743]: 2025-11-26 12:40:15.236602936 +0000 UTC m=+0.094916469 container attach d021722196b6bf946e8f5ae557df9262a1bb2e7809dc96b8d8921fb448447515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:40:15 compute-0 podman[103743]: 2025-11-26 12:40:15.157926912 +0000 UTC m=+0.016240464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:40:15 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 26 12:40:15 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 26 12:40:15 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v142: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 44 B/s, 2 objects/s recovering
Nov 26 12:40:15 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:40:15 compute-0 eager_hopper[103756]: {
Nov 26 12:40:15 compute-0 eager_hopper[103756]:     "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 12:40:15 compute-0 eager_hopper[103756]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:40:15 compute-0 eager_hopper[103756]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 12:40:15 compute-0 eager_hopper[103756]:         "osd_id": 1,
Nov 26 12:40:15 compute-0 eager_hopper[103756]:         "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:40:15 compute-0 eager_hopper[103756]:         "type": "bluestore"
Nov 26 12:40:15 compute-0 eager_hopper[103756]:     },
Nov 26 12:40:15 compute-0 eager_hopper[103756]:     "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 12:40:15 compute-0 eager_hopper[103756]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:40:15 compute-0 eager_hopper[103756]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 12:40:15 compute-0 eager_hopper[103756]:         "osd_id": 2,
Nov 26 12:40:15 compute-0 eager_hopper[103756]:         "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:40:15 compute-0 eager_hopper[103756]:         "type": "bluestore"
Nov 26 12:40:15 compute-0 eager_hopper[103756]:     },
Nov 26 12:40:15 compute-0 eager_hopper[103756]:     "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 12:40:15 compute-0 eager_hopper[103756]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:40:15 compute-0 eager_hopper[103756]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 12:40:15 compute-0 eager_hopper[103756]:         "osd_id": 0,
Nov 26 12:40:15 compute-0 eager_hopper[103756]:         "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:40:15 compute-0 eager_hopper[103756]:         "type": "bluestore"
Nov 26 12:40:15 compute-0 eager_hopper[103756]:     }
Nov 26 12:40:15 compute-0 eager_hopper[103756]: }
Nov 26 12:40:16 compute-0 systemd[1]: libpod-d021722196b6bf946e8f5ae557df9262a1bb2e7809dc96b8d8921fb448447515.scope: Deactivated successfully.
Nov 26 12:40:16 compute-0 conmon[103756]: conmon d021722196b6bf946e8f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d021722196b6bf946e8f5ae557df9262a1bb2e7809dc96b8d8921fb448447515.scope/container/memory.events
Nov 26 12:40:16 compute-0 podman[103743]: 2025-11-26 12:40:16.013483735 +0000 UTC m=+0.871797267 container died d021722196b6bf946e8f5ae557df9262a1bb2e7809dc96b8d8921fb448447515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 26 12:40:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-80c703cd0701cff6d820cbb217a232cf7baf076ab2c233fe8b65d957325790e9-merged.mount: Deactivated successfully.
Nov 26 12:40:16 compute-0 podman[103743]: 2025-11-26 12:40:16.044090409 +0000 UTC m=+0.902403951 container remove d021722196b6bf946e8f5ae557df9262a1bb2e7809dc96b8d8921fb448447515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 12:40:16 compute-0 systemd[1]: libpod-conmon-d021722196b6bf946e8f5ae557df9262a1bb2e7809dc96b8d8921fb448447515.scope: Deactivated successfully.
Nov 26 12:40:16 compute-0 sudo[103653]: pam_unix(sudo:session): session closed for user root
Nov 26 12:40:16 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:40:16 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:40:16 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:40:16 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:40:16 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev d990967c-022f-469c-8e75-cef71d9108fa does not exist
Nov 26 12:40:16 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev f758bd9f-19e9-42a7-b5ae-56d78cc625e6 does not exist
Nov 26 12:40:16 compute-0 sudo[103798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:40:16 compute-0 sudo[103798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:40:16 compute-0 sudo[103798]: pam_unix(sudo:session): session closed for user root
Nov 26 12:40:16 compute-0 sudo[103823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 12:40:16 compute-0 sudo[103823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:40:16 compute-0 sudo[103823]: pam_unix(sudo:session): session closed for user root
Nov 26 12:40:16 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 26 12:40:16 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 26 12:40:16 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 26 12:40:16 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 26 12:40:17 compute-0 ceph-mon[74966]: 5.3 scrub starts
Nov 26 12:40:17 compute-0 ceph-mon[74966]: 5.3 scrub ok
Nov 26 12:40:17 compute-0 ceph-mon[74966]: pgmap v142: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 44 B/s, 2 objects/s recovering
Nov 26 12:40:17 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:40:17 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:40:17 compute-0 ceph-mon[74966]: 4.11 scrub starts
Nov 26 12:40:17 compute-0 ceph-mon[74966]: 4.11 scrub ok
Nov 26 12:40:17 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v143: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Nov 26 12:40:18 compute-0 ceph-mon[74966]: 5.c scrub starts
Nov 26 12:40:18 compute-0 ceph-mon[74966]: 5.c scrub ok
Nov 26 12:40:18 compute-0 sshd-session[103848]: Accepted publickey for zuul from 192.168.122.30 port 42018 ssh2: ECDSA SHA256:oYqKaXpw3UXfGsjV9kVmxjhHDxrL0kntNo/c2mjveus
Nov 26 12:40:18 compute-0 systemd-logind[777]: New session 34 of user zuul.
Nov 26 12:40:18 compute-0 systemd[1]: Started Session 34 of User zuul.
Nov 26 12:40:18 compute-0 sshd-session[103848]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:40:18 compute-0 python3.9[104001]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 26 12:40:19 compute-0 ceph-mon[74966]: pgmap v143: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Nov 26 12:40:19 compute-0 python3.9[104175]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:40:19 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v144: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 26 12:40:19 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 26 12:40:19 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 26 12:40:19 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 26 12:40:19 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 26 12:40:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Nov 26 12:40:20 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 26 12:40:20 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 26 12:40:20 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 26 12:40:20 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 26 12:40:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Nov 26 12:40:20 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Nov 26 12:40:20 compute-0 sudo[104329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltavdxuqfsbvewhymqoxnvqodkmxdmer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160819.9237263-45-102935264130803/AnsiballZ_command.py'
Nov 26 12:40:20 compute-0 sudo[104329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:40:20 compute-0 python3.9[104331]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:40:20 compute-0 sudo[104329]: pam_unix(sudo:session): session closed for user root
Nov 26 12:40:20 compute-0 sudo[104482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuchzoqtlgbndozbxmpbttlyulszxcym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160820.6185188-57-28397293803678/AnsiballZ_stat.py'
Nov 26 12:40:20 compute-0 sudo[104482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:40:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:40:21 compute-0 python3.9[104484]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:40:21 compute-0 sudo[104482]: pam_unix(sudo:session): session closed for user root
Nov 26 12:40:21 compute-0 ceph-mon[74966]: pgmap v144: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 26 12:40:21 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 26 12:40:21 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 26 12:40:21 compute-0 ceph-mon[74966]: osdmap e72: 3 total, 3 up, 3 in
Nov 26 12:40:21 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 72 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72 pruub=11.746877670s) [0] r=-1 lpr=72 pi=[52,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 132.966613770s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:21 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=0 lpr=72 pi=[52,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:21 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 72 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72 pruub=11.746830940s) [0] r=-1 lpr=72 pi=[52,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.966613770s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:21 compute-0 sudo[104636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgdslchowyclufzjuoghyeihecmwdkyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160821.2365377-68-7331762324326/AnsiballZ_file.py'
Nov 26 12:40:21 compute-0 sudo[104636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:40:21 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Nov 26 12:40:21 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Nov 26 12:40:21 compute-0 python3.9[104638]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:40:21 compute-0 sudo[104636]: pam_unix(sudo:session): session closed for user root
Nov 26 12:40:21 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v146: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:40:22 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Nov 26 12:40:22 compute-0 sudo[104788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbbgzxkipsxfqwagkbqdzcpekfcfwymc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160821.933744-77-88071326179516/AnsiballZ_file.py'
Nov 26 12:40:22 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Nov 26 12:40:22 compute-0 ceph-mon[74966]: 4.18 scrub starts
Nov 26 12:40:22 compute-0 ceph-mon[74966]: 4.18 scrub ok
Nov 26 12:40:22 compute-0 sudo[104788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:40:22 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Nov 26 12:40:22 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 73 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=72/73 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=0 lpr=72 pi=[52,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:22 compute-0 python3.9[104790]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:40:22 compute-0 sudo[104788]: pam_unix(sudo:session): session closed for user root
Nov 26 12:40:22 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Nov 26 12:40:22 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Nov 26 12:40:22 compute-0 python3.9[104940]: ansible-ansible.builtin.service_facts Invoked
Nov 26 12:40:22 compute-0 network[104957]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 12:40:22 compute-0 network[104958]: 'network-scripts' will be removed from distribution in near future.
Nov 26 12:40:22 compute-0 network[104959]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 12:40:23 compute-0 ceph-mon[74966]: pgmap v146: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:40:23 compute-0 ceph-mon[74966]: osdmap e73: 3 total, 3 up, 3 in
Nov 26 12:40:23 compute-0 ceph-mon[74966]: 5.1a scrub starts
Nov 26 12:40:23 compute-0 ceph-mon[74966]: 5.1a scrub ok
Nov 26 12:40:23 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v148: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:40:24 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 26 12:40:24 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 26 12:40:24 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Nov 26 12:40:24 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Nov 26 12:40:25 compute-0 python3.9[105219]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:40:25 compute-0 ceph-mon[74966]: pgmap v148: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:40:25 compute-0 ceph-mon[74966]: 5.18 scrub starts
Nov 26 12:40:25 compute-0 ceph-mon[74966]: 5.18 scrub ok
Nov 26 12:40:25 compute-0 ceph-mon[74966]: 5.5 scrub starts
Nov 26 12:40:25 compute-0 ceph-mon[74966]: 5.5 scrub ok
Nov 26 12:40:25 compute-0 python3.9[105369]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:40:25 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1a deep-scrub starts
Nov 26 12:40:25 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1a deep-scrub ok
Nov 26 12:40:25 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v149: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:40:25 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:40:26 compute-0 ceph-mon[74966]: 4.1a deep-scrub starts
Nov 26 12:40:26 compute-0 ceph-mon[74966]: 4.1a deep-scrub ok
Nov 26 12:40:26 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 26 12:40:26 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 26 12:40:26 compute-0 python3.9[105523]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:40:26 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.13 deep-scrub starts
Nov 26 12:40:26 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.13 deep-scrub ok
Nov 26 12:40:27 compute-0 sudo[105679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agqpjmqmmxgkeqgncdzlrwhpqzfyguid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160826.8303778-125-229099381999190/AnsiballZ_setup.py'
Nov 26 12:40:27 compute-0 sudo[105679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:40:27 compute-0 ceph-mon[74966]: pgmap v149: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:40:27 compute-0 ceph-mon[74966]: 5.1d scrub starts
Nov 26 12:40:27 compute-0 ceph-mon[74966]: 5.1d scrub ok
Nov 26 12:40:27 compute-0 ceph-mon[74966]: 4.13 deep-scrub starts
Nov 26 12:40:27 compute-0 ceph-mon[74966]: 4.13 deep-scrub ok
Nov 26 12:40:27 compute-0 python3.9[105681]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 12:40:27 compute-0 sudo[105679]: pam_unix(sudo:session): session closed for user root
Nov 26 12:40:27 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Nov 26 12:40:27 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Nov 26 12:40:27 compute-0 sudo[105763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhrcjdqatwxhaywyawecozrngwmdbtwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160826.8303778-125-229099381999190/AnsiballZ_dnf.py'
Nov 26 12:40:27 compute-0 sudo[105763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:40:27 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v150: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:40:27 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 26 12:40:27 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 26 12:40:27 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 26 12:40:27 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 26 12:40:27 compute-0 python3.9[105765]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 12:40:28 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Nov 26 12:40:28 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 26 12:40:28 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 26 12:40:28 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Nov 26 12:40:28 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Nov 26 12:40:28 compute-0 ceph-mon[74966]: 2.1c scrub starts
Nov 26 12:40:28 compute-0 ceph-mon[74966]: 2.1c scrub ok
Nov 26 12:40:28 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 26 12:40:28 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 26 12:40:28 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Nov 26 12:40:28 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Nov 26 12:40:29 compute-0 ceph-mon[74966]: pgmap v150: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:40:29 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 26 12:40:29 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 26 12:40:29 compute-0 ceph-mon[74966]: osdmap e74: 3 total, 3 up, 3 in
Nov 26 12:40:29 compute-0 ceph-mon[74966]: 2.7 scrub starts
Nov 26 12:40:29 compute-0 ceph-mon[74966]: 2.7 scrub ok
Nov 26 12:40:29 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 74 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74 pruub=13.283830643s) [1] r=-1 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 37'39 active pruub 146.304275513s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:29 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 74 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74 pruub=13.283709526s) [1] r=-1 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 146.304275513s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:29 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:29 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Nov 26 12:40:29 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Nov 26 12:40:29 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v152: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:40:29 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 26 12:40:29 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 26 12:40:29 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 26 12:40:29 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 26 12:40:30 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Nov 26 12:40:30 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 26 12:40:30 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 26 12:40:30 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Nov 26 12:40:30 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Nov 26 12:40:30 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75 pruub=10.842657089s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 140.926010132s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:30 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75 pruub=10.842606544s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.926010132s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:30 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75 pruub=10.843101501s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 140.926895142s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:30 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75 pruub=10.843066216s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.926895142s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:30 compute-0 ceph-mon[74966]: 10.3 scrub starts
Nov 26 12:40:30 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 26 12:40:30 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 26 12:40:30 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=74/75 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:30 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:30 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:30 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:40:30 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Nov 26 12:40:30 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Nov 26 12:40:30 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Nov 26 12:40:30 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:30 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:30 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:30 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:30 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:30 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:30 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:30 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:31 compute-0 ceph-mon[74966]: 10.3 scrub ok
Nov 26 12:40:31 compute-0 ceph-mon[74966]: pgmap v152: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:40:31 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 26 12:40:31 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 26 12:40:31 compute-0 ceph-mon[74966]: osdmap e75: 3 total, 3 up, 3 in
Nov 26 12:40:31 compute-0 ceph-mon[74966]: osdmap e76: 3 total, 3 up, 3 in
Nov 26 12:40:31 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.f scrub starts
Nov 26 12:40:31 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.f scrub ok
Nov 26 12:40:31 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v155: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:40:31 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 26 12:40:31 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 26 12:40:31 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 26 12:40:31 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 26 12:40:31 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Nov 26 12:40:31 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 26 12:40:31 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 26 12:40:31 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Nov 26 12:40:31 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Nov 26 12:40:32 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:32 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:32 compute-0 ceph-mon[74966]: 2.f scrub starts
Nov 26 12:40:32 compute-0 ceph-mon[74966]: 2.f scrub ok
Nov 26 12:40:32 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 26 12:40:32 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 26 12:40:32 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 26 12:40:32 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 26 12:40:32 compute-0 ceph-mon[74966]: osdmap e77: 3 total, 3 up, 3 in
Nov 26 12:40:32 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77 pruub=14.681272507s) [1] r=-1 lpr=77 pi=[57,77)/1 crt=37'39 mlcod 37'39 active pruub 150.319351196s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:32 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77 pruub=14.680977821s) [1] r=-1 lpr=77 pi=[57,77)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 150.319351196s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:32 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:33 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Nov 26 12:40:33 compute-0 ceph-mon[74966]: pgmap v155: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:40:33 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Nov 26 12:40:33 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Nov 26 12:40:33 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:33 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:33 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:33 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:33 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78 pruub=14.946485519s) [2] async=[2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 148.045059204s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:33 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78 pruub=14.946434021s) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.045059204s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:33 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78 pruub=14.944923401s) [2] async=[2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 148.043884277s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:33 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78 pruub=14.944789886s) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.043884277s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:33 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=77/78 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:33 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Nov 26 12:40:33 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Nov 26 12:40:33 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v158: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Nov 26 12:40:34 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Nov 26 12:40:34 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Nov 26 12:40:34 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Nov 26 12:40:34 compute-0 ceph-mon[74966]: osdmap e78: 3 total, 3 up, 3 in
Nov 26 12:40:34 compute-0 ceph-mon[74966]: 5.19 scrub starts
Nov 26 12:40:34 compute-0 ceph-mon[74966]: 5.19 scrub ok
Nov 26 12:40:34 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:34 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:34 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.8 deep-scrub starts
Nov 26 12:40:34 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.8 deep-scrub ok
Nov 26 12:40:35 compute-0 ceph-mon[74966]: pgmap v158: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Nov 26 12:40:35 compute-0 ceph-mon[74966]: osdmap e79: 3 total, 3 up, 3 in
Nov 26 12:40:35 compute-0 ceph-mon[74966]: 4.8 deep-scrub starts
Nov 26 12:40:35 compute-0 ceph-mon[74966]: 4.8 deep-scrub ok
Nov 26 12:40:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:40:35
Nov 26 12:40:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 12:40:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Some PGs (0.006557) are inactive; try again later
Nov 26 12:40:35 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v160: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 3 objects/s recovering
Nov 26 12:40:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:40:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:40:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:40:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:40:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:40:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:40:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 12:40:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 12:40:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:40:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:40:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:40:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:40:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:40:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:40:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:40:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:40:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:40:36 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Nov 26 12:40:36 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Nov 26 12:40:36 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.5 deep-scrub starts
Nov 26 12:40:36 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.5 deep-scrub ok
Nov 26 12:40:37 compute-0 ceph-mon[74966]: pgmap v160: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 3 objects/s recovering
Nov 26 12:40:37 compute-0 ceph-mon[74966]: 5.7 scrub starts
Nov 26 12:40:37 compute-0 ceph-mon[74966]: 5.7 scrub ok
Nov 26 12:40:37 compute-0 ceph-mon[74966]: 10.5 deep-scrub starts
Nov 26 12:40:37 compute-0 ceph-mon[74966]: 10.5 deep-scrub ok
Nov 26 12:40:37 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 26 12:40:37 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 26 12:40:37 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.18 deep-scrub starts
Nov 26 12:40:37 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.18 deep-scrub ok
Nov 26 12:40:37 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v161: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 2 objects/s recovering
Nov 26 12:40:38 compute-0 ceph-mon[74966]: 4.5 scrub starts
Nov 26 12:40:38 compute-0 ceph-mon[74966]: 4.5 scrub ok
Nov 26 12:40:38 compute-0 ceph-mon[74966]: 2.18 deep-scrub starts
Nov 26 12:40:38 compute-0 ceph-mon[74966]: 2.18 deep-scrub ok
Nov 26 12:40:38 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.7 deep-scrub starts
Nov 26 12:40:38 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.7 deep-scrub ok
Nov 26 12:40:39 compute-0 ceph-mon[74966]: pgmap v161: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 2 objects/s recovering
Nov 26 12:40:39 compute-0 ceph-mon[74966]: 4.7 deep-scrub starts
Nov 26 12:40:39 compute-0 ceph-mon[74966]: 4.7 deep-scrub ok
Nov 26 12:40:39 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v162: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 1 objects/s recovering
Nov 26 12:40:39 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 26 12:40:39 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 26 12:40:39 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 26 12:40:39 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 26 12:40:40 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Nov 26 12:40:40 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 26 12:40:40 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 26 12:40:40 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 26 12:40:40 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 26 12:40:40 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Nov 26 12:40:40 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Nov 26 12:40:40 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.a scrub starts
Nov 26 12:40:40 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.a scrub ok
Nov 26 12:40:40 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:40:41 compute-0 ceph-mon[74966]: pgmap v162: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 1 objects/s recovering
Nov 26 12:40:41 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 26 12:40:41 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 26 12:40:41 compute-0 ceph-mon[74966]: osdmap e80: 3 total, 3 up, 3 in
Nov 26 12:40:41 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 26 12:40:41 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 26 12:40:41 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Nov 26 12:40:41 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Nov 26 12:40:41 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v164: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 8 B/s, 0 objects/s recovering
Nov 26 12:40:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 26 12:40:41 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 26 12:40:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 26 12:40:41 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 26 12:40:42 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Nov 26 12:40:42 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 26 12:40:42 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 26 12:40:42 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Nov 26 12:40:42 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Nov 26 12:40:42 compute-0 ceph-mon[74966]: 10.a scrub starts
Nov 26 12:40:42 compute-0 ceph-mon[74966]: 10.a scrub ok
Nov 26 12:40:42 compute-0 ceph-mon[74966]: 4.2 scrub starts
Nov 26 12:40:42 compute-0 ceph-mon[74966]: 4.2 scrub ok
Nov 26 12:40:42 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 26 12:40:42 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 26 12:40:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81 pruub=8.856573105s) [2] r=-1 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 37'39 active pruub 154.304397583s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:42 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81 pruub=8.856448174s) [2] r=-1 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 154.304397583s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:42 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:43 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Nov 26 12:40:43 compute-0 ceph-mon[74966]: 5.4 scrub starts
Nov 26 12:40:43 compute-0 ceph-mon[74966]: 5.4 scrub ok
Nov 26 12:40:43 compute-0 ceph-mon[74966]: pgmap v164: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 8 B/s, 0 objects/s recovering
Nov 26 12:40:43 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 26 12:40:43 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 26 12:40:43 compute-0 ceph-mon[74966]: osdmap e81: 3 total, 3 up, 3 in
Nov 26 12:40:43 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Nov 26 12:40:43 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Nov 26 12:40:43 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=81/82 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:43 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Nov 26 12:40:43 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Nov 26 12:40:43 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v167: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 100 B/s, 0 objects/s recovering
Nov 26 12:40:43 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Nov 26 12:40:43 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 26 12:40:44 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Nov 26 12:40:44 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 26 12:40:44 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Nov 26 12:40:44 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Nov 26 12:40:44 compute-0 ceph-mon[74966]: osdmap e82: 3 total, 3 up, 3 in
Nov 26 12:40:44 compute-0 ceph-mon[74966]: 2.1d scrub starts
Nov 26 12:40:44 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 26 12:40:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 12:40:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:40:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 12:40:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:40:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:40:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:40:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:40:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:40:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:40:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:40:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:40:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:40:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.359070782053787e-07 of space, bias 4.0, pg target 0.0007630884938464544 quantized to 16 (current 16)
Nov 26 12:40:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:40:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:40:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:40:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 12:40:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:40:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 12:40:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:40:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:40:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:40:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 12:40:45 compute-0 ceph-mon[74966]: 2.1d scrub ok
Nov 26 12:40:45 compute-0 ceph-mon[74966]: pgmap v167: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 100 B/s, 0 objects/s recovering
Nov 26 12:40:45 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 26 12:40:45 compute-0 ceph-mon[74966]: osdmap e83: 3 total, 3 up, 3 in
Nov 26 12:40:45 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v169: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 106 B/s, 0 objects/s recovering
Nov 26 12:40:45 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Nov 26 12:40:45 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 26 12:40:45 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.c scrub starts
Nov 26 12:40:45 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:40:45 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.c scrub ok
Nov 26 12:40:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Nov 26 12:40:46 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 26 12:40:46 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 26 12:40:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Nov 26 12:40:46 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Nov 26 12:40:47 compute-0 ceph-mon[74966]: pgmap v169: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 106 B/s, 0 objects/s recovering
Nov 26 12:40:47 compute-0 ceph-mon[74966]: 10.c scrub starts
Nov 26 12:40:47 compute-0 ceph-mon[74966]: 10.c scrub ok
Nov 26 12:40:47 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 26 12:40:47 compute-0 ceph-mon[74966]: osdmap e84: 3 total, 3 up, 3 in
Nov 26 12:40:47 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Nov 26 12:40:47 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Nov 26 12:40:47 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v171: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 106 B/s, 0 objects/s recovering
Nov 26 12:40:47 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Nov 26 12:40:47 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 26 12:40:47 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 26 12:40:47 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 26 12:40:48 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Nov 26 12:40:48 compute-0 ceph-mon[74966]: 4.4 scrub starts
Nov 26 12:40:48 compute-0 ceph-mon[74966]: 4.4 scrub ok
Nov 26 12:40:48 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 26 12:40:48 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 26 12:40:48 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Nov 26 12:40:48 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Nov 26 12:40:48 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 26 12:40:48 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 26 12:40:49 compute-0 ceph-mon[74966]: pgmap v171: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 106 B/s, 0 objects/s recovering
Nov 26 12:40:49 compute-0 ceph-mon[74966]: 10.18 scrub starts
Nov 26 12:40:49 compute-0 ceph-mon[74966]: 10.18 scrub ok
Nov 26 12:40:49 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 26 12:40:49 compute-0 ceph-mon[74966]: osdmap e85: 3 total, 3 up, 3 in
Nov 26 12:40:49 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v173: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:40:49 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Nov 26 12:40:49 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 26 12:40:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Nov 26 12:40:50 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 26 12:40:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Nov 26 12:40:50 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Nov 26 12:40:50 compute-0 ceph-mon[74966]: 10.1b scrub starts
Nov 26 12:40:50 compute-0 ceph-mon[74966]: 10.1b scrub ok
Nov 26 12:40:50 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 26 12:40:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:40:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86 pruub=15.071921349s) [2] r=-1 lpr=86 pi=[53,86)/1 crt=44'389 mlcod 0'0 active pruub 169.294906616s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:50 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86 pruub=15.071560860s) [2] r=-1 lpr=86 pi=[53,86)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 169.294906616s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:50 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:51 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Nov 26 12:40:51 compute-0 ceph-mon[74966]: pgmap v173: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:40:51 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 26 12:40:51 compute-0 ceph-mon[74966]: osdmap e86: 3 total, 3 up, 3 in
Nov 26 12:40:51 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Nov 26 12:40:51 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Nov 26 12:40:51 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:51 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:51 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:51 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:51 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 26 12:40:51 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 26 12:40:51 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v176: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:40:52 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Nov 26 12:40:52 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Nov 26 12:40:52 compute-0 ceph-mon[74966]: osdmap e87: 3 total, 3 up, 3 in
Nov 26 12:40:52 compute-0 ceph-mon[74966]: 4.9 scrub starts
Nov 26 12:40:52 compute-0 ceph-mon[74966]: 4.9 scrub ok
Nov 26 12:40:52 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Nov 26 12:40:52 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:52 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 26 12:40:52 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 26 12:40:52 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Nov 26 12:40:52 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Nov 26 12:40:53 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Nov 26 12:40:53 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Nov 26 12:40:53 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Nov 26 12:40:53 compute-0 ceph-mon[74966]: pgmap v176: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:40:53 compute-0 ceph-mon[74966]: osdmap e88: 3 total, 3 up, 3 in
Nov 26 12:40:53 compute-0 ceph-mon[74966]: 4.f scrub starts
Nov 26 12:40:53 compute-0 ceph-mon[74966]: 4.f scrub ok
Nov 26 12:40:53 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89 pruub=14.997830391s) [2] async=[2] r=-1 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 44'389 active pruub 171.500854492s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:53 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89 pruub=14.997679710s) [2] r=-1 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 171.500854492s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:40:53 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:40:53 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:40:53 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v179: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 26 12:40:54 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Nov 26 12:40:54 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Nov 26 12:40:54 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Nov 26 12:40:54 compute-0 ceph-mon[74966]: 2.19 scrub starts
Nov 26 12:40:54 compute-0 ceph-mon[74966]: 2.19 scrub ok
Nov 26 12:40:54 compute-0 ceph-mon[74966]: osdmap e89: 3 total, 3 up, 3 in
Nov 26 12:40:54 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 90 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:40:55 compute-0 ceph-mon[74966]: pgmap v179: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 26 12:40:55 compute-0 ceph-mon[74966]: osdmap e90: 3 total, 3 up, 3 in
Nov 26 12:40:55 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 26 12:40:55 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v181: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 1 objects/s recovering
Nov 26 12:40:55 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 26 12:40:55 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:40:56 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 26 12:40:56 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 26 12:40:57 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 26 12:40:57 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 26 12:40:57 compute-0 ceph-mon[74966]: 5.1e scrub starts
Nov 26 12:40:57 compute-0 ceph-mon[74966]: pgmap v181: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 1 objects/s recovering
Nov 26 12:40:57 compute-0 ceph-mon[74966]: 5.1e scrub ok
Nov 26 12:40:57 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 26 12:40:57 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 26 12:40:57 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v182: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 26 12:40:57 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Nov 26 12:40:57 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Nov 26 12:40:58 compute-0 ceph-mon[74966]: 7.1b scrub starts
Nov 26 12:40:58 compute-0 ceph-mon[74966]: 7.1b scrub ok
Nov 26 12:40:58 compute-0 ceph-mon[74966]: 10.1c scrub starts
Nov 26 12:40:58 compute-0 ceph-mon[74966]: 10.1c scrub ok
Nov 26 12:40:58 compute-0 ceph-mon[74966]: 4.d scrub starts
Nov 26 12:40:58 compute-0 ceph-mon[74966]: 4.d scrub ok
Nov 26 12:40:58 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 26 12:40:58 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 26 12:40:59 compute-0 ceph-mon[74966]: pgmap v182: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 26 12:40:59 compute-0 ceph-mon[74966]: 8.14 scrub starts
Nov 26 12:40:59 compute-0 ceph-mon[74966]: 8.14 scrub ok
Nov 26 12:40:59 compute-0 ceph-mon[74966]: 4.10 scrub starts
Nov 26 12:40:59 compute-0 ceph-mon[74966]: 4.10 scrub ok
Nov 26 12:40:59 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 26 12:40:59 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 26 12:40:59 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v183: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Nov 26 12:40:59 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Nov 26 12:40:59 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 26 12:41:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Nov 26 12:41:00 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 26 12:41:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Nov 26 12:41:00 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Nov 26 12:41:00 compute-0 ceph-mon[74966]: 4.12 scrub starts
Nov 26 12:41:00 compute-0 ceph-mon[74966]: 4.12 scrub ok
Nov 26 12:41:00 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 26 12:41:00 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Nov 26 12:41:00 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Nov 26 12:41:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:41:01 compute-0 ceph-mon[74966]: pgmap v183: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Nov 26 12:41:01 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 26 12:41:01 compute-0 ceph-mon[74966]: osdmap e91: 3 total, 3 up, 3 in
Nov 26 12:41:01 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Nov 26 12:41:01 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Nov 26 12:41:01 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v185: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:41:01 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Nov 26 12:41:01 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 26 12:41:01 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 26 12:41:01 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 26 12:41:02 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Nov 26 12:41:02 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 26 12:41:02 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Nov 26 12:41:02 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Nov 26 12:41:02 compute-0 ceph-mon[74966]: 11.14 scrub starts
Nov 26 12:41:02 compute-0 ceph-mon[74966]: 11.14 scrub ok
Nov 26 12:41:02 compute-0 ceph-mon[74966]: 4.14 scrub starts
Nov 26 12:41:02 compute-0 ceph-mon[74966]: 4.14 scrub ok
Nov 26 12:41:02 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 26 12:41:02 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 26 12:41:02 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 26 12:41:02 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Nov 26 12:41:02 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Nov 26 12:41:03 compute-0 ceph-mon[74966]: pgmap v185: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:41:03 compute-0 ceph-mon[74966]: 7.18 scrub starts
Nov 26 12:41:03 compute-0 ceph-mon[74966]: 7.18 scrub ok
Nov 26 12:41:03 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 26 12:41:03 compute-0 ceph-mon[74966]: osdmap e92: 3 total, 3 up, 3 in
Nov 26 12:41:03 compute-0 ceph-mon[74966]: 7.7 scrub starts
Nov 26 12:41:03 compute-0 ceph-mon[74966]: 7.7 scrub ok
Nov 26 12:41:03 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v187: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:41:03 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Nov 26 12:41:03 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 26 12:41:04 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Nov 26 12:41:04 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 26 12:41:04 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Nov 26 12:41:04 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Nov 26 12:41:04 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93 pruub=11.804219246s) [0] r=-1 lpr=93 pi=[62,93)/1 crt=44'389 mlcod 0'0 active pruub 172.308624268s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:04 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93 pruub=11.803787231s) [0] r=-1 lpr=93 pi=[62,93)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 172.308624268s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:41:04 compute-0 ceph-mon[74966]: 8.10 scrub starts
Nov 26 12:41:04 compute-0 ceph-mon[74966]: 8.10 scrub ok
Nov 26 12:41:04 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 26 12:41:04 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93) [0] r=0 lpr=93 pi=[62,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:41:04 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 92 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92 pruub=10.754682541s) [1] r=-1 lpr=92 pi=[54,92)/1 crt=44'389 mlcod 0'0 active pruub 178.302368164s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:04 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 93 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92 pruub=10.754209518s) [1] r=-1 lpr=92 pi=[54,92)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 178.302368164s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:41:04 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 93 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92) [1] r=0 lpr=93 pi=[54,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:41:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Nov 26 12:41:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Nov 26 12:41:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Nov 26 12:41:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Nov 26 12:41:05 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Nov 26 12:41:05 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:05 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=-1 lpr=94 pi=[62,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:05 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:41:05 compute-0 ceph-mon[74966]: pgmap v187: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:41:05 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 26 12:41:05 compute-0 ceph-mon[74966]: osdmap e93: 3 total, 3 up, 3 in
Nov 26 12:41:05 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=-1 lpr=94 pi=[62,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:41:05 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:05 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:41:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:41:05 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.b scrub starts
Nov 26 12:41:05 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.b scrub ok
Nov 26 12:41:05 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Nov 26 12:41:05 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Nov 26 12:41:05 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v190: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:41:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:41:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:41:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:41:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:41:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:41:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:41:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:41:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Nov 26 12:41:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Nov 26 12:41:06 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Nov 26 12:41:06 compute-0 ceph-mon[74966]: 10.1d scrub starts
Nov 26 12:41:06 compute-0 ceph-mon[74966]: 10.1d scrub ok
Nov 26 12:41:06 compute-0 ceph-mon[74966]: osdmap e94: 3 total, 3 up, 3 in
Nov 26 12:41:06 compute-0 ceph-mon[74966]: 7.b scrub starts
Nov 26 12:41:06 compute-0 ceph-mon[74966]: 7.b scrub ok
Nov 26 12:41:06 compute-0 ceph-mon[74966]: 11.1 scrub starts
Nov 26 12:41:06 compute-0 ceph-mon[74966]: 11.1 scrub ok
Nov 26 12:41:06 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:41:06 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 95 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:41:07 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Nov 26 12:41:07 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Nov 26 12:41:07 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Nov 26 12:41:07 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Nov 26 12:41:07 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96 pruub=14.996132851s) [0] async=[0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 44'389 active pruub 178.522598267s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:07 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96 pruub=14.996060371s) [0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 178.522598267s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:41:07 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Nov 26 12:41:07 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:07 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:41:07 compute-0 ceph-mon[74966]: pgmap v190: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:41:07 compute-0 ceph-mon[74966]: osdmap e95: 3 total, 3 up, 3 in
Nov 26 12:41:07 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96 pruub=15.074978828s) [1] async=[1] r=-1 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 44'389 active pruub 185.642929077s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:07 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96 pruub=15.073821068s) [1] r=-1 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.642929077s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:41:07 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:07 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:41:07 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.d deep-scrub starts
Nov 26 12:41:07 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.d deep-scrub ok
Nov 26 12:41:07 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v193: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Nov 26 12:41:07 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Nov 26 12:41:07 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 26 12:41:08 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 26 12:41:08 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 26 12:41:08 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Nov 26 12:41:08 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 26 12:41:08 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Nov 26 12:41:08 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Nov 26 12:41:08 compute-0 ceph-mon[74966]: 10.1f scrub starts
Nov 26 12:41:08 compute-0 ceph-mon[74966]: 10.1f scrub ok
Nov 26 12:41:08 compute-0 ceph-mon[74966]: osdmap e96: 3 total, 3 up, 3 in
Nov 26 12:41:08 compute-0 ceph-mon[74966]: 7.d deep-scrub starts
Nov 26 12:41:08 compute-0 ceph-mon[74966]: 7.d deep-scrub ok
Nov 26 12:41:08 compute-0 ceph-mon[74966]: pgmap v193: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Nov 26 12:41:08 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 26 12:41:08 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 97 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=96/97 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:41:08 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 97 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=96/97 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:41:08 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Nov 26 12:41:08 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Nov 26 12:41:09 compute-0 ceph-mon[74966]: 7.1a scrub starts
Nov 26 12:41:09 compute-0 ceph-mon[74966]: 7.1a scrub ok
Nov 26 12:41:09 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 26 12:41:09 compute-0 ceph-mon[74966]: osdmap e97: 3 total, 3 up, 3 in
Nov 26 12:41:09 compute-0 ceph-mon[74966]: 7.1f scrub starts
Nov 26 12:41:09 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v195: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 7 op/s; 47 B/s, 1 objects/s recovering
Nov 26 12:41:09 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Nov 26 12:41:09 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 26 12:41:10 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Nov 26 12:41:10 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 26 12:41:10 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Nov 26 12:41:10 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Nov 26 12:41:10 compute-0 ceph-mon[74966]: 7.1f scrub ok
Nov 26 12:41:10 compute-0 ceph-mon[74966]: pgmap v195: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 7 op/s; 47 B/s, 1 objects/s recovering
Nov 26 12:41:10 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 26 12:41:10 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.f scrub starts
Nov 26 12:41:10 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.f scrub ok
Nov 26 12:41:10 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:41:11 compute-0 sudo[105763]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:11 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 26 12:41:11 compute-0 ceph-mon[74966]: osdmap e98: 3 total, 3 up, 3 in
Nov 26 12:41:11 compute-0 ceph-mon[74966]: 11.f scrub starts
Nov 26 12:41:11 compute-0 ceph-mon[74966]: 11.f scrub ok
Nov 26 12:41:11 compute-0 sudo[106065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iymacelmqjgcnslnayuwyokspqjitfjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160871.3507566-137-279066138702867/AnsiballZ_command.py'
Nov 26 12:41:11 compute-0 sudo[106065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:11 compute-0 python3.9[106067]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:41:11 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.c scrub starts
Nov 26 12:41:11 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.c scrub ok
Nov 26 12:41:11 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v197: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 6 op/s; 59 B/s, 2 objects/s recovering
Nov 26 12:41:11 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Nov 26 12:41:11 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 26 12:41:11 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.c deep-scrub starts
Nov 26 12:41:11 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.c deep-scrub ok
Nov 26 12:41:12 compute-0 sudo[106065]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:12 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Nov 26 12:41:12 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 26 12:41:12 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Nov 26 12:41:12 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99 pruub=10.713154793s) [2] r=-1 lpr=99 pi=[54,99)/1 crt=44'389 mlcod 0'0 active pruub 186.304855347s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:12 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99 pruub=10.712953568s) [2] r=-1 lpr=99 pi=[54,99)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 186.304855347s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:41:12 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Nov 26 12:41:12 compute-0 ceph-mon[74966]: 8.c scrub starts
Nov 26 12:41:12 compute-0 ceph-mon[74966]: 8.c scrub ok
Nov 26 12:41:12 compute-0 ceph-mon[74966]: pgmap v197: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 6 op/s; 59 B/s, 2 objects/s recovering
Nov 26 12:41:12 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 26 12:41:12 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:41:12 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.e scrub starts
Nov 26 12:41:12 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.e scrub ok
Nov 26 12:41:12 compute-0 sudo[106352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyiajvfholhnajbcnckqugfozsyrbdvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160872.3275936-145-161842760692/AnsiballZ_selinux.py'
Nov 26 12:41:12 compute-0 sudo[106352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:13 compute-0 python3.9[106354]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 26 12:41:13 compute-0 sudo[106352]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:13 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Nov 26 12:41:13 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Nov 26 12:41:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:41:13 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Nov 26 12:41:13 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:13 compute-0 ceph-mon[74966]: 7.c deep-scrub starts
Nov 26 12:41:13 compute-0 ceph-mon[74966]: 7.c deep-scrub ok
Nov 26 12:41:13 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 26 12:41:13 compute-0 ceph-mon[74966]: osdmap e99: 3 total, 3 up, 3 in
Nov 26 12:41:13 compute-0 ceph-mon[74966]: 11.e scrub starts
Nov 26 12:41:13 compute-0 ceph-mon[74966]: 11.e scrub ok
Nov 26 12:41:13 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:41:13 compute-0 sudo[106504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiwvtuptmupekfdwbibwncnnnghrvhve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160873.259993-156-60311502739920/AnsiballZ_command.py'
Nov 26 12:41:13 compute-0 sudo[106504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:13 compute-0 python3.9[106506]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 26 12:41:13 compute-0 sudo[106504]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.e scrub starts
Nov 26 12:41:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.e scrub ok
Nov 26 12:41:13 compute-0 sudo[106656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prikqomcqtzhrmxllsotnkescdsaswwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160873.6959248-164-241649027881910/AnsiballZ_file.py'
Nov 26 12:41:13 compute-0 sudo[106656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:13 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v200: 305 pgs: 305 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 0 objects/s recovering
Nov 26 12:41:13 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 26 12:41:13 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 26 12:41:14 compute-0 python3.9[106658]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:41:14 compute-0 sudo[106656]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:14 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Nov 26 12:41:14 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 26 12:41:14 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Nov 26 12:41:14 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Nov 26 12:41:14 compute-0 ceph-mon[74966]: osdmap e100: 3 total, 3 up, 3 in
Nov 26 12:41:14 compute-0 ceph-mon[74966]: 8.e scrub starts
Nov 26 12:41:14 compute-0 ceph-mon[74966]: pgmap v200: 305 pgs: 305 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 0 objects/s recovering
Nov 26 12:41:14 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 26 12:41:14 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 26 12:41:14 compute-0 ceph-mon[74966]: osdmap e101: 3 total, 3 up, 3 in
Nov 26 12:41:14 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:41:14 compute-0 sudo[106808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxmmistaeyjlieykbnoidywgunpdbcvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160874.1453838-172-194977793772356/AnsiballZ_mount.py'
Nov 26 12:41:14 compute-0 sudo[106808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:14 compute-0 python3.9[106810]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 26 12:41:14 compute-0 sudo[106808]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:14 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Nov 26 12:41:14 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Nov 26 12:41:15 compute-0 sudo[106960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehjdqevjknfhwopzcimhyvybirmehqgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160875.1508324-200-261072444488067/AnsiballZ_file.py'
Nov 26 12:41:15 compute-0 sudo[106960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:15 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Nov 26 12:41:15 compute-0 ceph-mon[74966]: 8.e scrub ok
Nov 26 12:41:15 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Nov 26 12:41:15 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Nov 26 12:41:15 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102 pruub=14.987218857s) [2] async=[2] r=-1 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 44'389 active pruub 193.604431152s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:15 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102 pruub=14.987159729s) [2] r=-1 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 193.604431152s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:41:15 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:15 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:41:15 compute-0 python3.9[106962]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:41:15 compute-0 sudo[106960]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:15 compute-0 sudo[107112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umhajfxexeykavqzuliiauoncuqhiren ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160875.5748363-208-224362381598880/AnsiballZ_stat.py'
Nov 26 12:41:15 compute-0 sudo[107112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:15 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Nov 26 12:41:15 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Nov 26 12:41:15 compute-0 python3.9[107114]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:41:15 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v203: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:41:15 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 26 12:41:15 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 26 12:41:15 compute-0 sudo[107112]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:15 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:41:16 compute-0 sudo[107190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyhehxzxxdydgqdnkcbvlactrebalfee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160875.5748363-208-224362381598880/AnsiballZ_file.py'
Nov 26 12:41:16 compute-0 sudo[107190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:16 compute-0 python3.9[107192]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:41:16 compute-0 sudo[107193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:41:16 compute-0 sudo[107193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:41:16 compute-0 sudo[107193]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:16 compute-0 sudo[107190]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:16 compute-0 sudo[107218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:41:16 compute-0 sudo[107218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:41:16 compute-0 sudo[107218]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:16 compute-0 sudo[107266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:41:16 compute-0 sudo[107266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:41:16 compute-0 sudo[107266]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:16 compute-0 sudo[107292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 12:41:16 compute-0 sudo[107292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:41:16 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Nov 26 12:41:16 compute-0 ceph-mon[74966]: 8.15 scrub starts
Nov 26 12:41:16 compute-0 ceph-mon[74966]: 8.15 scrub ok
Nov 26 12:41:16 compute-0 ceph-mon[74966]: osdmap e102: 3 total, 3 up, 3 in
Nov 26 12:41:16 compute-0 ceph-mon[74966]: 11.17 scrub starts
Nov 26 12:41:16 compute-0 ceph-mon[74966]: pgmap v203: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:41:16 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 26 12:41:16 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 26 12:41:16 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Nov 26 12:41:16 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Nov 26 12:41:16 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 103 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=102/103 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:41:16 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 26 12:41:16 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 26 12:41:16 compute-0 sudo[107292]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:16 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:41:16 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:41:16 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 12:41:16 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:41:16 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 12:41:16 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:41:16 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev e2c9ff1b-d3e7-4413-8903-771e192fb86f does not exist
Nov 26 12:41:16 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 29128d62-e193-4e59-9485-79ad11543262 does not exist
Nov 26 12:41:16 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev cb9697a3-f981-4b9d-aaa9-ace88bb0505f does not exist
Nov 26 12:41:16 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 12:41:16 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:41:16 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 12:41:16 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:41:16 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:41:16 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:41:16 compute-0 sudo[107445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:41:16 compute-0 sudo[107445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:41:16 compute-0 sudo[107445]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:16 compute-0 sudo[107494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfwoheeohoqneidsdcompjabglhdwgst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160876.5296407-229-10642672488333/AnsiballZ_stat.py'
Nov 26 12:41:16 compute-0 sudo[107494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:16 compute-0 sudo[107499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:41:16 compute-0 sudo[107499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:41:16 compute-0 sudo[107499]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:16 compute-0 sudo[107524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:41:16 compute-0 sudo[107524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:41:16 compute-0 sudo[107524]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:16 compute-0 sudo[107549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 12:41:16 compute-0 sudo[107549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:41:16 compute-0 python3.9[107498]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:41:16 compute-0 sudo[107494]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:17 compute-0 podman[107632]: 2025-11-26 12:41:17.054450759 +0000 UTC m=+0.026248755 container create 19a21873e29f9ee6cd26a6a7ccbc74044092d0c9997fbe17f0c075b76c7d472a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_northcutt, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:41:17 compute-0 systemd[1]: Started libpod-conmon-19a21873e29f9ee6cd26a6a7ccbc74044092d0c9997fbe17f0c075b76c7d472a.scope.
Nov 26 12:41:17 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:41:17 compute-0 podman[107632]: 2025-11-26 12:41:17.107487871 +0000 UTC m=+0.079285877 container init 19a21873e29f9ee6cd26a6a7ccbc74044092d0c9997fbe17f0c075b76c7d472a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 26 12:41:17 compute-0 podman[107632]: 2025-11-26 12:41:17.112854284 +0000 UTC m=+0.084652270 container start 19a21873e29f9ee6cd26a6a7ccbc74044092d0c9997fbe17f0c075b76c7d472a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_northcutt, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:41:17 compute-0 podman[107632]: 2025-11-26 12:41:17.114808556 +0000 UTC m=+0.086606542 container attach 19a21873e29f9ee6cd26a6a7ccbc74044092d0c9997fbe17f0c075b76c7d472a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_northcutt, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:41:17 compute-0 fervent_northcutt[107646]: 167 167
Nov 26 12:41:17 compute-0 systemd[1]: libpod-19a21873e29f9ee6cd26a6a7ccbc74044092d0c9997fbe17f0c075b76c7d472a.scope: Deactivated successfully.
Nov 26 12:41:17 compute-0 podman[107632]: 2025-11-26 12:41:17.116590634 +0000 UTC m=+0.088388620 container died 19a21873e29f9ee6cd26a6a7ccbc74044092d0c9997fbe17f0c075b76c7d472a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_northcutt, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 12:41:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c614989a51f8bb2acb31cc66200842ab564217f77878addca62143bd29b1a0c-merged.mount: Deactivated successfully.
Nov 26 12:41:17 compute-0 podman[107632]: 2025-11-26 12:41:17.134671636 +0000 UTC m=+0.106469622 container remove 19a21873e29f9ee6cd26a6a7ccbc74044092d0c9997fbe17f0c075b76c7d472a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 12:41:17 compute-0 podman[107632]: 2025-11-26 12:41:17.043658121 +0000 UTC m=+0.015456127 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:41:17 compute-0 systemd[1]: libpod-conmon-19a21873e29f9ee6cd26a6a7ccbc74044092d0c9997fbe17f0c075b76c7d472a.scope: Deactivated successfully.
Nov 26 12:41:17 compute-0 podman[107692]: 2025-11-26 12:41:17.24610548 +0000 UTC m=+0.026290862 container create e0f0789b903427c9fe08088d6b80a9a1caf94d2d1cf0467d137020e2d72a134a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 12:41:17 compute-0 systemd[1]: Started libpod-conmon-e0f0789b903427c9fe08088d6b80a9a1caf94d2d1cf0467d137020e2d72a134a.scope.
Nov 26 12:41:17 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:41:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f15432148fd04c5ebe4b597324599925c14f3790adeec5a2ed834ceb37c023/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:41:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f15432148fd04c5ebe4b597324599925c14f3790adeec5a2ed834ceb37c023/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:41:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f15432148fd04c5ebe4b597324599925c14f3790adeec5a2ed834ceb37c023/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:41:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f15432148fd04c5ebe4b597324599925c14f3790adeec5a2ed834ceb37c023/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:41:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f15432148fd04c5ebe4b597324599925c14f3790adeec5a2ed834ceb37c023/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:41:17 compute-0 podman[107692]: 2025-11-26 12:41:17.305073439 +0000 UTC m=+0.085258841 container init e0f0789b903427c9fe08088d6b80a9a1caf94d2d1cf0467d137020e2d72a134a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_haibt, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 26 12:41:17 compute-0 podman[107692]: 2025-11-26 12:41:17.309576104 +0000 UTC m=+0.089761486 container start e0f0789b903427c9fe08088d6b80a9a1caf94d2d1cf0467d137020e2d72a134a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 26 12:41:17 compute-0 podman[107692]: 2025-11-26 12:41:17.310803135 +0000 UTC m=+0.090988518 container attach e0f0789b903427c9fe08088d6b80a9a1caf94d2d1cf0467d137020e2d72a134a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_haibt, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 12:41:17 compute-0 podman[107692]: 2025-11-26 12:41:17.234714355 +0000 UTC m=+0.014899758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:41:17 compute-0 ceph-mon[74966]: 11.17 scrub ok
Nov 26 12:41:17 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 26 12:41:17 compute-0 ceph-mon[74966]: osdmap e103: 3 total, 3 up, 3 in
Nov 26 12:41:17 compute-0 ceph-mon[74966]: 7.10 scrub starts
Nov 26 12:41:17 compute-0 ceph-mon[74966]: 7.10 scrub ok
Nov 26 12:41:17 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:41:17 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:41:17 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:41:17 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:41:17 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:41:17 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:41:17 compute-0 sudo[107812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqcjhlldhzcukpskodyuhwcqyaajxkwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160877.2038321-242-200154721373248/AnsiballZ_getent.py'
Nov 26 12:41:17 compute-0 sudo[107812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:17 compute-0 python3.9[107814]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 26 12:41:17 compute-0 sudo[107812]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:17 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v205: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:41:17 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Nov 26 12:41:17 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 26 12:41:17 compute-0 sudo[107975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxxohtmxaqtvnsnxfpookfekniofhqvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160877.8085368-252-35401745953063/AnsiballZ_getent.py'
Nov 26 12:41:17 compute-0 sudo[107975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:18 compute-0 python3.9[107978]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 26 12:41:18 compute-0 modest_haibt[107734]: --> passed data devices: 0 physical, 3 LVM
Nov 26 12:41:18 compute-0 modest_haibt[107734]: --> relative data size: 1.0
Nov 26 12:41:18 compute-0 modest_haibt[107734]: --> All data devices are unavailable
Nov 26 12:41:18 compute-0 sudo[107975]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:18 compute-0 systemd[1]: libpod-e0f0789b903427c9fe08088d6b80a9a1caf94d2d1cf0467d137020e2d72a134a.scope: Deactivated successfully.
Nov 26 12:41:18 compute-0 podman[107692]: 2025-11-26 12:41:18.157646178 +0000 UTC m=+0.937831570 container died e0f0789b903427c9fe08088d6b80a9a1caf94d2d1cf0467d137020e2d72a134a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_haibt, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:41:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-50f15432148fd04c5ebe4b597324599925c14f3790adeec5a2ed834ceb37c023-merged.mount: Deactivated successfully.
Nov 26 12:41:18 compute-0 podman[107692]: 2025-11-26 12:41:18.195613252 +0000 UTC m=+0.975798634 container remove e0f0789b903427c9fe08088d6b80a9a1caf94d2d1cf0467d137020e2d72a134a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_haibt, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 12:41:18 compute-0 systemd[1]: libpod-conmon-e0f0789b903427c9fe08088d6b80a9a1caf94d2d1cf0467d137020e2d72a134a.scope: Deactivated successfully.
Nov 26 12:41:18 compute-0 sudo[107549]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:18 compute-0 sudo[108027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:41:18 compute-0 sudo[108027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:41:18 compute-0 sudo[108027]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:18 compute-0 sudo[108075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:41:18 compute-0 sudo[108075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:41:18 compute-0 sudo[108075]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:18 compute-0 sudo[108129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:41:18 compute-0 sudo[108129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:41:18 compute-0 sudo[108129]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Nov 26 12:41:18 compute-0 ceph-mon[74966]: pgmap v205: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:41:18 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 26 12:41:18 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 26 12:41:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Nov 26 12:41:18 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Nov 26 12:41:18 compute-0 sudo[108154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- lvm list --format json
Nov 26 12:41:18 compute-0 sudo[108154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:41:18 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104 pruub=11.659957886s) [0] r=-1 lpr=104 pi=[78,104)/1 crt=44'389 mlcod 0'0 active pruub 186.385116577s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:18 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104 pruub=11.659915924s) [0] r=-1 lpr=104 pi=[78,104)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 186.385116577s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:41:18 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104) [0] r=0 lpr=104 pi=[78,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:41:18 compute-0 sudo[108278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rupafwqrgvwsznkottkkddaxjfgoydcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160878.2701285-260-171143861088210/AnsiballZ_group.py'
Nov 26 12:41:18 compute-0 sudo[108278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:18 compute-0 podman[108286]: 2025-11-26 12:41:18.626020106 +0000 UTC m=+0.032523038 container create 23d401442ae3f0153df2411bbd7b00faffb3011bf445544be7392fe72102f9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curie, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 12:41:18 compute-0 systemd[1]: Started libpod-conmon-23d401442ae3f0153df2411bbd7b00faffb3011bf445544be7392fe72102f9c1.scope.
Nov 26 12:41:18 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:41:18 compute-0 podman[108286]: 2025-11-26 12:41:18.673367027 +0000 UTC m=+0.079869978 container init 23d401442ae3f0153df2411bbd7b00faffb3011bf445544be7392fe72102f9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curie, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:41:18 compute-0 podman[108286]: 2025-11-26 12:41:18.678815503 +0000 UTC m=+0.085318435 container start 23d401442ae3f0153df2411bbd7b00faffb3011bf445544be7392fe72102f9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curie, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:41:18 compute-0 podman[108286]: 2025-11-26 12:41:18.680064417 +0000 UTC m=+0.086567348 container attach 23d401442ae3f0153df2411bbd7b00faffb3011bf445544be7392fe72102f9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 12:41:18 compute-0 naughty_curie[108299]: 167 167
Nov 26 12:41:18 compute-0 systemd[1]: libpod-23d401442ae3f0153df2411bbd7b00faffb3011bf445544be7392fe72102f9c1.scope: Deactivated successfully.
Nov 26 12:41:18 compute-0 podman[108286]: 2025-11-26 12:41:18.682957018 +0000 UTC m=+0.089459949 container died 23d401442ae3f0153df2411bbd7b00faffb3011bf445544be7392fe72102f9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curie, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:41:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f4944a1844b2c551686814df7bb754851a80e8ba42b2672e9a87b4c2a92b18c-merged.mount: Deactivated successfully.
Nov 26 12:41:18 compute-0 podman[108286]: 2025-11-26 12:41:18.702270242 +0000 UTC m=+0.108773173 container remove 23d401442ae3f0153df2411bbd7b00faffb3011bf445544be7392fe72102f9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:41:18 compute-0 podman[108286]: 2025-11-26 12:41:18.613130056 +0000 UTC m=+0.019632988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:41:18 compute-0 systemd[1]: libpod-conmon-23d401442ae3f0153df2411bbd7b00faffb3011bf445544be7392fe72102f9c1.scope: Deactivated successfully.
Nov 26 12:41:18 compute-0 python3.9[108285]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 26 12:41:18 compute-0 sudo[108278]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:18 compute-0 podman[108336]: 2025-11-26 12:41:18.82150706 +0000 UTC m=+0.029598696 container create 654571fcb0121c37f841357dcd55418cdc664e1d010677098674d765faac9525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 12:41:18 compute-0 systemd[1]: Started libpod-conmon-654571fcb0121c37f841357dcd55418cdc664e1d010677098674d765faac9525.scope.
Nov 26 12:41:18 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:41:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2593a2f25bb4f82233cc1d0baa71197adf70eed40ebd25ef32fc493d5e043c51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:41:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2593a2f25bb4f82233cc1d0baa71197adf70eed40ebd25ef32fc493d5e043c51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:41:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2593a2f25bb4f82233cc1d0baa71197adf70eed40ebd25ef32fc493d5e043c51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:41:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2593a2f25bb4f82233cc1d0baa71197adf70eed40ebd25ef32fc493d5e043c51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:41:18 compute-0 podman[108336]: 2025-11-26 12:41:18.880165836 +0000 UTC m=+0.088257483 container init 654571fcb0121c37f841357dcd55418cdc664e1d010677098674d765faac9525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:41:18 compute-0 podman[108336]: 2025-11-26 12:41:18.885820782 +0000 UTC m=+0.093912418 container start 654571fcb0121c37f841357dcd55418cdc664e1d010677098674d765faac9525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:41:18 compute-0 podman[108336]: 2025-11-26 12:41:18.887063984 +0000 UTC m=+0.095155641 container attach 654571fcb0121c37f841357dcd55418cdc664e1d010677098674d765faac9525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:41:18 compute-0 podman[108336]: 2025-11-26 12:41:18.808697723 +0000 UTC m=+0.016789379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:41:18 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Nov 26 12:41:19 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Nov 26 12:41:19 compute-0 sudo[108489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tubivjwsubckjgjdnqfukkzoiwdadvzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160878.8981488-269-63953575078644/AnsiballZ_file.py'
Nov 26 12:41:19 compute-0 sudo[108489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:19 compute-0 python3.9[108491]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 26 12:41:19 compute-0 sudo[108489]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:19 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Nov 26 12:41:19 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 26 12:41:19 compute-0 ceph-mon[74966]: osdmap e104: 3 total, 3 up, 3 in
Nov 26 12:41:19 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Nov 26 12:41:19 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Nov 26 12:41:19 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[78,105)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:19 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[78,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:41:19 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:19 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:41:19 compute-0 hopeful_cray[108359]: {
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:     "0": [
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:         {
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "devices": [
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "/dev/loop3"
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             ],
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "lv_name": "ceph_lv0",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "lv_size": "21470642176",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "name": "ceph_lv0",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "tags": {
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.cluster_name": "ceph",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.crush_device_class": "",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.encrypted": "0",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.osd_id": "0",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.type": "block",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.vdo": "0"
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             },
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "type": "block",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "vg_name": "ceph_vg0"
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:         }
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:     ],
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:     "1": [
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:         {
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "devices": [
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "/dev/loop4"
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             ],
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "lv_name": "ceph_lv1",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "lv_size": "21470642176",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "name": "ceph_lv1",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "tags": {
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.cluster_name": "ceph",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.crush_device_class": "",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.encrypted": "0",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.osd_id": "1",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.type": "block",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.vdo": "0"
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             },
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "type": "block",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "vg_name": "ceph_vg1"
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:         }
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:     ],
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:     "2": [
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:         {
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "devices": [
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "/dev/loop5"
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             ],
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "lv_name": "ceph_lv2",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "lv_size": "21470642176",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "name": "ceph_lv2",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "tags": {
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.cluster_name": "ceph",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.crush_device_class": "",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.encrypted": "0",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.osd_id": "2",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.type": "block",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:                 "ceph.vdo": "0"
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             },
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "type": "block",
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:             "vg_name": "ceph_vg2"
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:         }
Nov 26 12:41:19 compute-0 hopeful_cray[108359]:     ]
Nov 26 12:41:19 compute-0 hopeful_cray[108359]: }
Nov 26 12:41:19 compute-0 systemd[1]: libpod-654571fcb0121c37f841357dcd55418cdc664e1d010677098674d765faac9525.scope: Deactivated successfully.
Nov 26 12:41:19 compute-0 podman[108336]: 2025-11-26 12:41:19.527618609 +0000 UTC m=+0.735710244 container died 654571fcb0121c37f841357dcd55418cdc664e1d010677098674d765faac9525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 12:41:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-2593a2f25bb4f82233cc1d0baa71197adf70eed40ebd25ef32fc493d5e043c51-merged.mount: Deactivated successfully.
Nov 26 12:41:19 compute-0 podman[108336]: 2025-11-26 12:41:19.561322881 +0000 UTC m=+0.769414517 container remove 654571fcb0121c37f841357dcd55418cdc664e1d010677098674d765faac9525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 12:41:19 compute-0 systemd[1]: libpod-conmon-654571fcb0121c37f841357dcd55418cdc664e1d010677098674d765faac9525.scope: Deactivated successfully.
Nov 26 12:41:19 compute-0 sudo[108154]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:19 compute-0 sudo[108656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndhoijmakjxncgxvyasvwoyefiogeicl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160879.43628-280-13484806350858/AnsiballZ_dnf.py'
Nov 26 12:41:19 compute-0 sudo[108656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:19 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Nov 26 12:41:19 compute-0 sudo[108655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:41:19 compute-0 sudo[108655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:41:19 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Nov 26 12:41:19 compute-0 sudo[108655]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:19 compute-0 sudo[108683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:41:19 compute-0 sudo[108683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:41:19 compute-0 sudo[108683]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:19 compute-0 sudo[108708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:41:19 compute-0 sudo[108708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:41:19 compute-0 sudo[108708]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:19 compute-0 sudo[108733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- raw list --format json
Nov 26 12:41:19 compute-0 sudo[108733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:41:19 compute-0 python3.9[108672]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 12:41:19 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v208: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 1 objects/s recovering
Nov 26 12:41:19 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 26 12:41:19 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 26 12:41:19 compute-0 podman[108790]: 2025-11-26 12:41:19.977166893 +0000 UTC m=+0.026906913 container create 64a4cba0a6cda44f0afc155dc698a5b1d5dcee3b53a7a59b99666424eea2ed95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jones, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:41:20 compute-0 systemd[1]: Started libpod-conmon-64a4cba0a6cda44f0afc155dc698a5b1d5dcee3b53a7a59b99666424eea2ed95.scope.
Nov 26 12:41:20 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:41:20 compute-0 podman[108790]: 2025-11-26 12:41:20.037530442 +0000 UTC m=+0.087270462 container init 64a4cba0a6cda44f0afc155dc698a5b1d5dcee3b53a7a59b99666424eea2ed95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:41:20 compute-0 podman[108790]: 2025-11-26 12:41:20.042912483 +0000 UTC m=+0.092652504 container start 64a4cba0a6cda44f0afc155dc698a5b1d5dcee3b53a7a59b99666424eea2ed95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jones, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:41:20 compute-0 podman[108790]: 2025-11-26 12:41:20.044695522 +0000 UTC m=+0.094435563 container attach 64a4cba0a6cda44f0afc155dc698a5b1d5dcee3b53a7a59b99666424eea2ed95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:41:20 compute-0 musing_jones[108804]: 167 167
Nov 26 12:41:20 compute-0 systemd[1]: libpod-64a4cba0a6cda44f0afc155dc698a5b1d5dcee3b53a7a59b99666424eea2ed95.scope: Deactivated successfully.
Nov 26 12:41:20 compute-0 podman[108790]: 2025-11-26 12:41:20.04684836 +0000 UTC m=+0.096588380 container died 64a4cba0a6cda44f0afc155dc698a5b1d5dcee3b53a7a59b99666424eea2ed95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jones, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:41:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d2351c2551ad19a0050c83b4f1862446f8838f4c74b7608d75e9c71dd125d4b-merged.mount: Deactivated successfully.
Nov 26 12:41:20 compute-0 podman[108790]: 2025-11-26 12:41:19.966258718 +0000 UTC m=+0.015998759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:41:20 compute-0 podman[108790]: 2025-11-26 12:41:20.064818452 +0000 UTC m=+0.114558474 container remove 64a4cba0a6cda44f0afc155dc698a5b1d5dcee3b53a7a59b99666424eea2ed95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 12:41:20 compute-0 systemd[1]: libpod-conmon-64a4cba0a6cda44f0afc155dc698a5b1d5dcee3b53a7a59b99666424eea2ed95.scope: Deactivated successfully.
Nov 26 12:41:20 compute-0 podman[108826]: 2025-11-26 12:41:20.177664008 +0000 UTC m=+0.027491576 container create 1f496a0f11caa64d697edc226b55ed67554a3f329a370c90fc3c79d1e985b598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wescoff, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:41:20 compute-0 systemd[1]: Started libpod-conmon-1f496a0f11caa64d697edc226b55ed67554a3f329a370c90fc3c79d1e985b598.scope.
Nov 26 12:41:20 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:41:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc1c2aaa9646df43c285a3a38ec390195e8f01bae8d926ab43951db51c6a02c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:41:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc1c2aaa9646df43c285a3a38ec390195e8f01bae8d926ab43951db51c6a02c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:41:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc1c2aaa9646df43c285a3a38ec390195e8f01bae8d926ab43951db51c6a02c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:41:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc1c2aaa9646df43c285a3a38ec390195e8f01bae8d926ab43951db51c6a02c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:41:20 compute-0 podman[108826]: 2025-11-26 12:41:20.234216685 +0000 UTC m=+0.084044263 container init 1f496a0f11caa64d697edc226b55ed67554a3f329a370c90fc3c79d1e985b598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wescoff, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:41:20 compute-0 podman[108826]: 2025-11-26 12:41:20.251099849 +0000 UTC m=+0.100927417 container start 1f496a0f11caa64d697edc226b55ed67554a3f329a370c90fc3c79d1e985b598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wescoff, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 12:41:20 compute-0 podman[108826]: 2025-11-26 12:41:20.252343782 +0000 UTC m=+0.102171351 container attach 1f496a0f11caa64d697edc226b55ed67554a3f329a370c90fc3c79d1e985b598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 12:41:20 compute-0 podman[108826]: 2025-11-26 12:41:20.16640985 +0000 UTC m=+0.016237439 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:41:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Nov 26 12:41:20 compute-0 ceph-mon[74966]: 8.2 scrub starts
Nov 26 12:41:20 compute-0 ceph-mon[74966]: 8.2 scrub ok
Nov 26 12:41:20 compute-0 ceph-mon[74966]: osdmap e105: 3 total, 3 up, 3 in
Nov 26 12:41:20 compute-0 ceph-mon[74966]: 7.12 scrub starts
Nov 26 12:41:20 compute-0 ceph-mon[74966]: 7.12 scrub ok
Nov 26 12:41:20 compute-0 ceph-mon[74966]: pgmap v208: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 1 objects/s recovering
Nov 26 12:41:20 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 26 12:41:20 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 26 12:41:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Nov 26 12:41:20 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Nov 26 12:41:20 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:41:20 compute-0 sudo[108656]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:41:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Nov 26 12:41:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Nov 26 12:41:20 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:20 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:41:20 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Nov 26 12:41:20 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107 pruub=15.438099861s) [0] async=[0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 44'389 active pruub 192.607360840s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:20 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107 pruub=15.438027382s) [0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 192.607360840s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:41:20 compute-0 inspiring_wescoff[108839]: {
Nov 26 12:41:20 compute-0 inspiring_wescoff[108839]:     "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 12:41:20 compute-0 inspiring_wescoff[108839]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:41:20 compute-0 inspiring_wescoff[108839]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 12:41:20 compute-0 inspiring_wescoff[108839]:         "osd_id": 1,
Nov 26 12:41:20 compute-0 inspiring_wescoff[108839]:         "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:41:20 compute-0 inspiring_wescoff[108839]:         "type": "bluestore"
Nov 26 12:41:20 compute-0 inspiring_wescoff[108839]:     },
Nov 26 12:41:20 compute-0 inspiring_wescoff[108839]:     "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 12:41:20 compute-0 inspiring_wescoff[108839]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:41:20 compute-0 inspiring_wescoff[108839]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 12:41:20 compute-0 inspiring_wescoff[108839]:         "osd_id": 2,
Nov 26 12:41:20 compute-0 inspiring_wescoff[108839]:         "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:41:20 compute-0 inspiring_wescoff[108839]:         "type": "bluestore"
Nov 26 12:41:20 compute-0 inspiring_wescoff[108839]:     },
Nov 26 12:41:20 compute-0 inspiring_wescoff[108839]:     "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 12:41:20 compute-0 inspiring_wescoff[108839]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:41:20 compute-0 inspiring_wescoff[108839]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 12:41:20 compute-0 inspiring_wescoff[108839]:         "osd_id": 0,
Nov 26 12:41:20 compute-0 inspiring_wescoff[108839]:         "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:41:20 compute-0 inspiring_wescoff[108839]:         "type": "bluestore"
Nov 26 12:41:20 compute-0 inspiring_wescoff[108839]:     }
Nov 26 12:41:20 compute-0 inspiring_wescoff[108839]: }
Nov 26 12:41:21 compute-0 systemd[1]: libpod-1f496a0f11caa64d697edc226b55ed67554a3f329a370c90fc3c79d1e985b598.scope: Deactivated successfully.
Nov 26 12:41:21 compute-0 podman[108826]: 2025-11-26 12:41:21.013569849 +0000 UTC m=+0.863397417 container died 1f496a0f11caa64d697edc226b55ed67554a3f329a370c90fc3c79d1e985b598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wescoff, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 12:41:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-acc1c2aaa9646df43c285a3a38ec390195e8f01bae8d926ab43951db51c6a02c-merged.mount: Deactivated successfully.
Nov 26 12:41:21 compute-0 podman[108826]: 2025-11-26 12:41:21.044108907 +0000 UTC m=+0.893936475 container remove 1f496a0f11caa64d697edc226b55ed67554a3f329a370c90fc3c79d1e985b598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 12:41:21 compute-0 systemd[1]: libpod-conmon-1f496a0f11caa64d697edc226b55ed67554a3f329a370c90fc3c79d1e985b598.scope: Deactivated successfully.
Nov 26 12:41:21 compute-0 sudo[108733]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:21 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:41:21 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:41:21 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:41:21 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:41:21 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev fd71e5db-2569-4a61-8491-87400ce4a034 does not exist
Nov 26 12:41:21 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 5024ec4c-c025-4e48-aff8-f712fb69df67 does not exist
Nov 26 12:41:21 compute-0 sudo[109053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eawkfthjppnhmfsmageipwlbmdieifes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160880.9250107-288-170230359675319/AnsiballZ_file.py'
Nov 26 12:41:21 compute-0 sudo[109012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:41:21 compute-0 sudo[109012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:41:21 compute-0 sudo[109053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:21 compute-0 sudo[109012]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:21 compute-0 sudo[109059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 12:41:21 compute-0 sudo[109059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:41:21 compute-0 sudo[109059]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:21 compute-0 python3.9[109058]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:41:21 compute-0 sudo[109053]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:21 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 26 12:41:21 compute-0 ceph-mon[74966]: osdmap e106: 3 total, 3 up, 3 in
Nov 26 12:41:21 compute-0 ceph-mon[74966]: osdmap e107: 3 total, 3 up, 3 in
Nov 26 12:41:21 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:41:21 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:41:21 compute-0 sudo[109233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odbeorwmgcmkdwhrxmbpxpkoveiqlmyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160881.4016483-296-193064201817477/AnsiballZ_stat.py'
Nov 26 12:41:21 compute-0 sudo[109233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:21 compute-0 python3.9[109235]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:41:21 compute-0 sudo[109233]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:21 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v211: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 3 objects/s recovering
Nov 26 12:41:21 compute-0 sudo[109311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drcikxbetzhqunrspbkibmytrbyycpgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160881.4016483-296-193064201817477/AnsiballZ_file.py'
Nov 26 12:41:21 compute-0 sudo[109311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:21 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Nov 26 12:41:21 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Nov 26 12:41:21 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Nov 26 12:41:21 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 108 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=107/108 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:41:22 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.d scrub starts
Nov 26 12:41:22 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.d scrub ok
Nov 26 12:41:22 compute-0 python3.9[109313]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:41:22 compute-0 sudo[109311]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:22 compute-0 sudo[109463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzsafqirvpxxdbqcpcpfhpmfyeztwrwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160882.1889648-309-17548998608715/AnsiballZ_stat.py'
Nov 26 12:41:22 compute-0 sudo[109463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:22 compute-0 python3.9[109465]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:41:22 compute-0 sudo[109463]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:22 compute-0 sudo[109541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpevixxlbvbkjgwdeeoimafktoxsizlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160882.1889648-309-17548998608715/AnsiballZ_file.py'
Nov 26 12:41:22 compute-0 sudo[109541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:22 compute-0 python3.9[109543]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:41:22 compute-0 sudo[109541]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:22 compute-0 ceph-mon[74966]: pgmap v211: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 3 objects/s recovering
Nov 26 12:41:22 compute-0 ceph-mon[74966]: osdmap e108: 3 total, 3 up, 3 in
Nov 26 12:41:22 compute-0 ceph-mon[74966]: 11.d scrub starts
Nov 26 12:41:22 compute-0 ceph-mon[74966]: 11.d scrub ok
Nov 26 12:41:23 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 26 12:41:23 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 26 12:41:23 compute-0 sudo[109693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulihkmxecendeucnyojvqolwicssrjum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160883.0454698-324-200576390751744/AnsiballZ_dnf.py'
Nov 26 12:41:23 compute-0 sudo[109693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:23 compute-0 python3.9[109695]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 12:41:23 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.f scrub starts
Nov 26 12:41:23 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.f scrub ok
Nov 26 12:41:23 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 1 objects/s recovering
Nov 26 12:41:23 compute-0 ceph-mon[74966]: 7.2 scrub starts
Nov 26 12:41:23 compute-0 ceph-mon[74966]: 7.2 scrub ok
Nov 26 12:41:23 compute-0 ceph-mon[74966]: 8.f scrub starts
Nov 26 12:41:23 compute-0 ceph-mon[74966]: 8.f scrub ok
Nov 26 12:41:24 compute-0 sudo[109693]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:24 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Nov 26 12:41:24 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Nov 26 12:41:24 compute-0 python3.9[109846]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:41:24 compute-0 ceph-mon[74966]: pgmap v213: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 1 objects/s recovering
Nov 26 12:41:25 compute-0 python3.9[109998]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 26 12:41:25 compute-0 python3.9[110148]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:41:25 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v214: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 26 12:41:25 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:41:25 compute-0 ceph-mon[74966]: 7.14 scrub starts
Nov 26 12:41:25 compute-0 ceph-mon[74966]: 7.14 scrub ok
Nov 26 12:41:26 compute-0 sudo[110298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtocvucgfzgajwvcauygnhtdrwggecii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160886.1222217-365-217625720138775/AnsiballZ_systemd.py'
Nov 26 12:41:26 compute-0 sudo[110298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:26 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.b scrub starts
Nov 26 12:41:26 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.b scrub ok
Nov 26 12:41:26 compute-0 python3.9[110300]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:41:26 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 26 12:41:26 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 26 12:41:26 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 26 12:41:26 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 26 12:41:26 compute-0 ceph-mon[74966]: pgmap v214: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 26 12:41:26 compute-0 ceph-mon[74966]: 8.b scrub starts
Nov 26 12:41:26 compute-0 ceph-mon[74966]: 8.b scrub ok
Nov 26 12:41:27 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 26 12:41:27 compute-0 sudo[110298]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:27 compute-0 python3.9[110462]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 26 12:41:27 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2 B/s, 0 objects/s recovering
Nov 26 12:41:27 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Nov 26 12:41:27 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 26 12:41:27 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Nov 26 12:41:27 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 26 12:41:27 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Nov 26 12:41:27 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Nov 26 12:41:27 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 26 12:41:28 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109 pruub=11.622039795s) [0] r=-1 lpr=109 pi=[62,109)/1 crt=44'389 mlcod 0'0 active pruub 196.307983398s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:28 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109 pruub=11.621926308s) [0] r=-1 lpr=109 pi=[62,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 196.307983398s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:41:28 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109) [0] r=0 lpr=109 pi=[62,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:41:28 compute-0 sudo[110612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbkykjujtpqzxacghqwcqcinnomcqgly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160888.6834834-422-32761908834405/AnsiballZ_systemd.py'
Nov 26 12:41:28 compute-0 sudo[110612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:28 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Nov 26 12:41:28 compute-0 ceph-mon[74966]: pgmap v215: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2 B/s, 0 objects/s recovering
Nov 26 12:41:28 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 26 12:41:28 compute-0 ceph-mon[74966]: osdmap e109: 3 total, 3 up, 3 in
Nov 26 12:41:28 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Nov 26 12:41:28 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Nov 26 12:41:28 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 110 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[62,110)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:28 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 110 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[62,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:41:28 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:28 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:41:29 compute-0 python3.9[110614]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:41:29 compute-0 sudo[110612]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:29 compute-0 sudo[110766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpllnvqaedfcsuwxazuefalthszzacca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160889.2297735-422-261527050339161/AnsiballZ_systemd.py'
Nov 26 12:41:29 compute-0 sudo[110766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:29 compute-0 python3.9[110768]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:41:29 compute-0 sudo[110766]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:29 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Nov 26 12:41:29 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Nov 26 12:41:29 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v218: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:41:29 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 12:41:29 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 12:41:29 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Nov 26 12:41:29 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 12:41:29 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Nov 26 12:41:29 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=12.962300301s) [1] r=-1 lpr=111 pi=[65,111)/1 crt=44'389 mlcod 0'0 active pruub 199.162155151s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:29 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=12.962102890s) [1] r=-1 lpr=111 pi=[65,111)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 199.162155151s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:41:29 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Nov 26 12:41:29 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111) [1] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:41:29 compute-0 ceph-mon[74966]: osdmap e110: 3 total, 3 up, 3 in
Nov 26 12:41:29 compute-0 ceph-mon[74966]: 8.9 scrub starts
Nov 26 12:41:29 compute-0 ceph-mon[74966]: 8.9 scrub ok
Nov 26 12:41:29 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 12:41:30 compute-0 sshd-session[103851]: Connection closed by 192.168.122.30 port 42018
Nov 26 12:41:30 compute-0 sshd-session[103848]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:41:30 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Nov 26 12:41:30 compute-0 systemd[1]: session-34.scope: Consumed 46.117s CPU time.
Nov 26 12:41:30 compute-0 systemd-logind[777]: Session 34 logged out. Waiting for processes to exit.
Nov 26 12:41:30 compute-0 systemd-logind[777]: Removed session 34.
Nov 26 12:41:30 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Nov 26 12:41:30 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Nov 26 12:41:30 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:41:30 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 26 12:41:30 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 26 12:41:30 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:41:30 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Nov 26 12:41:30 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Nov 26 12:41:30 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Nov 26 12:41:30 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[65,112)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:30 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[65,112)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:41:30 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:30 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:41:30 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112 pruub=15.156332970s) [0] async=[0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 44'389 active pruub 202.327163696s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:30 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112 pruub=15.156107903s) [0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 202.327163696s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:41:30 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:30 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:41:30 compute-0 ceph-mon[74966]: pgmap v218: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:41:30 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 12:41:30 compute-0 ceph-mon[74966]: osdmap e111: 3 total, 3 up, 3 in
Nov 26 12:41:30 compute-0 ceph-mon[74966]: 11.2 scrub starts
Nov 26 12:41:30 compute-0 ceph-mon[74966]: 11.2 scrub ok
Nov 26 12:41:30 compute-0 ceph-mon[74966]: 7.6 scrub starts
Nov 26 12:41:30 compute-0 ceph-mon[74966]: 7.6 scrub ok
Nov 26 12:41:30 compute-0 ceph-mon[74966]: osdmap e112: 3 total, 3 up, 3 in
Nov 26 12:41:31 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 26 12:41:31 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 26 12:41:31 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Nov 26 12:41:31 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Nov 26 12:41:31 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v221: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Nov 26 12:41:31 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Nov 26 12:41:31 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Nov 26 12:41:31 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Nov 26 12:41:31 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 113 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:41:31 compute-0 ceph-mon[74966]: 7.e scrub starts
Nov 26 12:41:31 compute-0 ceph-mon[74966]: 7.e scrub ok
Nov 26 12:41:31 compute-0 ceph-mon[74966]: osdmap e113: 3 total, 3 up, 3 in
Nov 26 12:41:32 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.b deep-scrub starts
Nov 26 12:41:32 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.b deep-scrub ok
Nov 26 12:41:32 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:41:32 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Nov 26 12:41:32 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Nov 26 12:41:32 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Nov 26 12:41:32 compute-0 ceph-mon[74966]: 7.16 scrub starts
Nov 26 12:41:32 compute-0 ceph-mon[74966]: 7.16 scrub ok
Nov 26 12:41:32 compute-0 ceph-mon[74966]: pgmap v221: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Nov 26 12:41:32 compute-0 ceph-mon[74966]: 11.b deep-scrub starts
Nov 26 12:41:32 compute-0 ceph-mon[74966]: 11.b deep-scrub ok
Nov 26 12:41:33 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Nov 26 12:41:33 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Nov 26 12:41:33 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.470242500s) [1] async=[1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 44'389 active pruub 204.689910889s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:33 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.469997406s) [1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 204.689910889s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:41:33 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:41:33 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:41:33 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v224: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Nov 26 12:41:34 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Nov 26 12:41:34 compute-0 ceph-mon[74966]: 7.17 scrub starts
Nov 26 12:41:34 compute-0 ceph-mon[74966]: 7.17 scrub ok
Nov 26 12:41:34 compute-0 ceph-mon[74966]: osdmap e114: 3 total, 3 up, 3 in
Nov 26 12:41:34 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Nov 26 12:41:34 compute-0 ceph-mon[74966]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Nov 26 12:41:34 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 115 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=114/115 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:41:35 compute-0 ceph-mon[74966]: pgmap v224: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Nov 26 12:41:35 compute-0 ceph-mon[74966]: osdmap e115: 3 total, 3 up, 3 in
Nov 26 12:41:35 compute-0 sshd-session[110795]: Accepted publickey for zuul from 192.168.122.30 port 54072 ssh2: ECDSA SHA256:oYqKaXpw3UXfGsjV9kVmxjhHDxrL0kntNo/c2mjveus
Nov 26 12:41:35 compute-0 systemd-logind[777]: New session 35 of user zuul.
Nov 26 12:41:35 compute-0 systemd[1]: Started Session 35 of User zuul.
Nov 26 12:41:35 compute-0 sshd-session[110795]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:41:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:41:35
Nov 26 12:41:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 12:41:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Some PGs (0.003279) are inactive; try again later
Nov 26 12:41:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:41:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:41:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:41:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:41:35 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 2 objects/s recovering
Nov 26 12:41:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:41:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:41:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 12:41:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:41:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 12:41:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:41:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:41:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:41:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:41:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:41:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:41:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:41:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:41:36 compute-0 python3.9[110948]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:41:36 compute-0 sudo[111102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yovfkddrihkytjimtglmpjeyxmavrqsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160896.453694-36-103789687696897/AnsiballZ_getent.py'
Nov 26 12:41:36 compute-0 sudo[111102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:36 compute-0 python3.9[111104]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 26 12:41:36 compute-0 sudo[111102]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:37 compute-0 ceph-mon[74966]: pgmap v226: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 2 objects/s recovering
Nov 26 12:41:37 compute-0 sudo[111255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxvfsrdlugdioheopqnkewzxmppuxhyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160897.0973003-48-42607247417478/AnsiballZ_setup.py'
Nov 26 12:41:37 compute-0 sudo[111255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:37 compute-0 python3.9[111257]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 12:41:37 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 26 12:41:37 compute-0 sudo[111255]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:37 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 26 12:41:37 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 26 12:41:37 compute-0 sudo[111339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdqwscumtptdchicdbfriikobevseljm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160897.0973003-48-42607247417478/AnsiballZ_dnf.py'
Nov 26 12:41:37 compute-0 sudo[111339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:38 compute-0 python3.9[111341]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 26 12:41:39 compute-0 ceph-mon[74966]: 7.19 scrub starts
Nov 26 12:41:39 compute-0 ceph-mon[74966]: 7.19 scrub ok
Nov 26 12:41:39 compute-0 ceph-mon[74966]: pgmap v227: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 26 12:41:39 compute-0 sudo[111339]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:39 compute-0 sudo[111492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmauprmrqaznlkyvqptuukqwpcsoivdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160899.2829041-62-220190444194402/AnsiballZ_dnf.py'
Nov 26 12:41:39 compute-0 sudo[111492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:39 compute-0 python3.9[111494]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 12:41:39 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 26 12:41:40 compute-0 sudo[111492]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:40 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:41:41 compute-0 ceph-mon[74966]: pgmap v228: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 26 12:41:41 compute-0 sudo[111645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtcshjkuwtbqvfuzhhreivpbsjzcisiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160900.7179437-70-270720602729128/AnsiballZ_systemd.py'
Nov 26 12:41:41 compute-0 sudo[111645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:41 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.d scrub starts
Nov 26 12:41:41 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.d scrub ok
Nov 26 12:41:41 compute-0 python3.9[111647]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 12:41:41 compute-0 sudo[111645]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:41 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Nov 26 12:41:41 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Nov 26 12:41:41 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Nov 26 12:41:41 compute-0 python3.9[111800]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:41:42 compute-0 ceph-mon[74966]: 8.d scrub starts
Nov 26 12:41:42 compute-0 ceph-mon[74966]: 8.d scrub ok
Nov 26 12:41:42 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 26 12:41:42 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 26 12:41:42 compute-0 sudo[111950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egqghwpljzcpxxppgwsiindxtkzkdxog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160902.1137962-88-197075535698639/AnsiballZ_sefcontext.py'
Nov 26 12:41:42 compute-0 sudo[111950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:42 compute-0 python3.9[111952]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 26 12:41:42 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 26 12:41:42 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 26 12:41:42 compute-0 sudo[111950]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:43 compute-0 ceph-mon[74966]: 7.1d scrub starts
Nov 26 12:41:43 compute-0 ceph-mon[74966]: 7.1d scrub ok
Nov 26 12:41:43 compute-0 ceph-mon[74966]: pgmap v229: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Nov 26 12:41:43 compute-0 ceph-mon[74966]: 7.1 scrub starts
Nov 26 12:41:43 compute-0 ceph-mon[74966]: 7.1 scrub ok
Nov 26 12:41:43 compute-0 python3.9[112102]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:41:43 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 26 12:41:43 compute-0 sudo[112258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnkvopfwpwqdkkfopfdlaldxtsvjjpfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160903.6209338-106-207793989068818/AnsiballZ_dnf.py'
Nov 26 12:41:43 compute-0 sudo[112258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:43 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 26 12:41:43 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Nov 26 12:41:43 compute-0 python3.9[112260]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 12:41:44 compute-0 ceph-mon[74966]: 7.1e scrub starts
Nov 26 12:41:44 compute-0 ceph-mon[74966]: 7.1e scrub ok
Nov 26 12:41:44 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Nov 26 12:41:44 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Nov 26 12:41:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 12:41:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:41:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 12:41:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:41:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:41:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:41:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:41:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:41:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:41:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:41:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:41:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:41:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 12:41:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:41:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:41:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:41:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 12:41:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:41:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 12:41:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:41:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:41:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:41:44 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 12:41:45 compute-0 sudo[112258]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:45 compute-0 ceph-mon[74966]: 7.4 scrub starts
Nov 26 12:41:45 compute-0 ceph-mon[74966]: 7.4 scrub ok
Nov 26 12:41:45 compute-0 ceph-mon[74966]: pgmap v230: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Nov 26 12:41:45 compute-0 ceph-mon[74966]: 7.3 scrub starts
Nov 26 12:41:45 compute-0 sudo[112411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjsivgfatopdrxehfrxcunqkdeltwxhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160905.1401174-114-272753683478580/AnsiballZ_command.py'
Nov 26 12:41:45 compute-0 sudo[112411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:45 compute-0 python3.9[112413]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:41:45 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Nov 26 12:41:45 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:41:46 compute-0 ceph-mon[74966]: 7.3 scrub ok
Nov 26 12:41:46 compute-0 sudo[112411]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:46 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.8 deep-scrub starts
Nov 26 12:41:46 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.8 deep-scrub ok
Nov 26 12:41:46 compute-0 sudo[112698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkadpexvpzxanbrjctfsliljwdwzeoql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160906.2191951-122-215636154477494/AnsiballZ_file.py'
Nov 26 12:41:46 compute-0 sudo[112698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:46 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.1 deep-scrub starts
Nov 26 12:41:46 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.1 deep-scrub ok
Nov 26 12:41:46 compute-0 python3.9[112700]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 26 12:41:46 compute-0 sudo[112698]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:46 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Nov 26 12:41:46 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Nov 26 12:41:47 compute-0 ceph-mon[74966]: pgmap v231: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Nov 26 12:41:47 compute-0 ceph-mon[74966]: 11.8 deep-scrub starts
Nov 26 12:41:47 compute-0 ceph-mon[74966]: 11.8 deep-scrub ok
Nov 26 12:41:47 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Nov 26 12:41:47 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Nov 26 12:41:47 compute-0 python3.9[112850]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:41:47 compute-0 sudo[113002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flhygffdouftxvizxuhhhrxchvsnwufq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160907.3776352-138-144761416404788/AnsiballZ_dnf.py'
Nov 26 12:41:47 compute-0 sudo[113002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:47 compute-0 python3.9[113004]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 12:41:47 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:41:48 compute-0 ceph-mon[74966]: 8.1 deep-scrub starts
Nov 26 12:41:48 compute-0 ceph-mon[74966]: 8.1 deep-scrub ok
Nov 26 12:41:48 compute-0 ceph-mon[74966]: 11.4 scrub starts
Nov 26 12:41:48 compute-0 ceph-mon[74966]: 11.4 scrub ok
Nov 26 12:41:48 compute-0 ceph-mon[74966]: 8.4 scrub starts
Nov 26 12:41:48 compute-0 ceph-mon[74966]: 8.4 scrub ok
Nov 26 12:41:48 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 26 12:41:48 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 26 12:41:48 compute-0 sudo[113002]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:48 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 26 12:41:48 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 26 12:41:49 compute-0 sudo[113155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plnrygcymldmzwifemxjohzqyscwtoeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160908.8666775-147-92042407811263/AnsiballZ_dnf.py'
Nov 26 12:41:49 compute-0 sudo[113155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:49 compute-0 ceph-mon[74966]: pgmap v232: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:41:49 compute-0 ceph-mon[74966]: 7.a scrub starts
Nov 26 12:41:49 compute-0 ceph-mon[74966]: 7.a scrub ok
Nov 26 12:41:49 compute-0 python3.9[113157]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 12:41:49 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Nov 26 12:41:49 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Nov 26 12:41:49 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:41:50 compute-0 ceph-mon[74966]: 7.f scrub starts
Nov 26 12:41:50 compute-0 ceph-mon[74966]: 7.f scrub ok
Nov 26 12:41:50 compute-0 sudo[113155]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:50 compute-0 sudo[113308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvlxagnrkstpviucyneeccvgxwtnmcei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160910.4474912-159-253035438738676/AnsiballZ_stat.py'
Nov 26 12:41:50 compute-0 sudo[113308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:50 compute-0 python3.9[113310]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:41:50 compute-0 sudo[113308]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:41:51 compute-0 ceph-mon[74966]: 7.9 scrub starts
Nov 26 12:41:51 compute-0 ceph-mon[74966]: 7.9 scrub ok
Nov 26 12:41:51 compute-0 ceph-mon[74966]: pgmap v233: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:41:51 compute-0 sudo[113462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hovpxjficzebpzdzxyqnvjtbcvnbjozi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160910.9023454-167-228502552635922/AnsiballZ_slurp.py'
Nov 26 12:41:51 compute-0 sudo[113462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:41:51 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 26 12:41:51 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 26 12:41:51 compute-0 python3.9[113464]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 26 12:41:51 compute-0 sudo[113462]: pam_unix(sudo:session): session closed for user root
Nov 26 12:41:51 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Nov 26 12:41:51 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Nov 26 12:41:51 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:41:51 compute-0 sshd-session[110798]: Connection closed by 192.168.122.30 port 54072
Nov 26 12:41:51 compute-0 sshd-session[110795]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:41:51 compute-0 systemd-logind[777]: Session 35 logged out. Waiting for processes to exit.
Nov 26 12:41:51 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Nov 26 12:41:51 compute-0 systemd[1]: session-35.scope: Consumed 12.764s CPU time.
Nov 26 12:41:51 compute-0 systemd-logind[777]: Removed session 35.
Nov 26 12:41:52 compute-0 ceph-mon[74966]: 7.8 scrub starts
Nov 26 12:41:52 compute-0 ceph-mon[74966]: 7.8 scrub ok
Nov 26 12:41:53 compute-0 ceph-mon[74966]: 11.6 scrub starts
Nov 26 12:41:53 compute-0 ceph-mon[74966]: 11.6 scrub ok
Nov 26 12:41:53 compute-0 ceph-mon[74966]: pgmap v234: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:41:53 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Nov 26 12:41:53 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Nov 26 12:41:53 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:41:54 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 26 12:41:54 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 26 12:41:54 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Nov 26 12:41:54 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Nov 26 12:41:55 compute-0 ceph-mon[74966]: 8.18 scrub starts
Nov 26 12:41:55 compute-0 ceph-mon[74966]: 8.18 scrub ok
Nov 26 12:41:55 compute-0 ceph-mon[74966]: pgmap v235: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:41:55 compute-0 ceph-mon[74966]: 11.18 scrub starts
Nov 26 12:41:55 compute-0 ceph-mon[74966]: 11.18 scrub ok
Nov 26 12:41:55 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:41:55 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:41:56 compute-0 ceph-mon[74966]: 8.1f scrub starts
Nov 26 12:41:56 compute-0 ceph-mon[74966]: 8.1f scrub ok
Nov 26 12:41:56 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 26 12:41:56 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 26 12:41:56 compute-0 sshd-session[113489]: Accepted publickey for zuul from 192.168.122.30 port 56928 ssh2: ECDSA SHA256:oYqKaXpw3UXfGsjV9kVmxjhHDxrL0kntNo/c2mjveus
Nov 26 12:41:56 compute-0 systemd-logind[777]: New session 36 of user zuul.
Nov 26 12:41:56 compute-0 systemd[1]: Started Session 36 of User zuul.
Nov 26 12:41:56 compute-0 sshd-session[113489]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:41:56 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Nov 26 12:41:56 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Nov 26 12:41:57 compute-0 ceph-mon[74966]: pgmap v236: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:41:57 compute-0 ceph-mon[74966]: 8.1b scrub starts
Nov 26 12:41:57 compute-0 ceph-mon[74966]: 8.1b scrub ok
Nov 26 12:41:57 compute-0 python3.9[113642]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:41:57 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:41:58 compute-0 ceph-mon[74966]: 8.3 scrub starts
Nov 26 12:41:58 compute-0 ceph-mon[74966]: 8.3 scrub ok
Nov 26 12:41:58 compute-0 python3.9[113796]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 12:41:59 compute-0 python3.9[113989]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:41:59 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Nov 26 12:41:59 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Nov 26 12:41:59 compute-0 ceph-mon[74966]: pgmap v237: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:41:59 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Nov 26 12:41:59 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Nov 26 12:41:59 compute-0 sshd-session[113492]: Connection closed by 192.168.122.30 port 56928
Nov 26 12:41:59 compute-0 sshd-session[113489]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:41:59 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Nov 26 12:41:59 compute-0 systemd[1]: session-36.scope: Consumed 1.947s CPU time.
Nov 26 12:41:59 compute-0 systemd-logind[777]: Session 36 logged out. Waiting for processes to exit.
Nov 26 12:41:59 compute-0 systemd-logind[777]: Removed session 36.
Nov 26 12:41:59 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:00 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.13 deep-scrub starts
Nov 26 12:42:00 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.13 deep-scrub ok
Nov 26 12:42:00 compute-0 ceph-mon[74966]: 8.1d scrub starts
Nov 26 12:42:00 compute-0 ceph-mon[74966]: 8.1d scrub ok
Nov 26 12:42:00 compute-0 ceph-mon[74966]: 11.1b scrub starts
Nov 26 12:42:00 compute-0 ceph-mon[74966]: 11.1b scrub ok
Nov 26 12:42:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:42:01 compute-0 ceph-mon[74966]: pgmap v238: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:01 compute-0 ceph-mon[74966]: 7.13 deep-scrub starts
Nov 26 12:42:01 compute-0 ceph-mon[74966]: 7.13 deep-scrub ok
Nov 26 12:42:01 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Nov 26 12:42:01 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Nov 26 12:42:01 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:02 compute-0 ceph-mon[74966]: 11.1c scrub starts
Nov 26 12:42:02 compute-0 ceph-mon[74966]: 11.1c scrub ok
Nov 26 12:42:03 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Nov 26 12:42:03 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Nov 26 12:42:03 compute-0 ceph-mon[74966]: pgmap v239: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:03 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Nov 26 12:42:03 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Nov 26 12:42:03 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:04 compute-0 ceph-mon[74966]: 8.1a scrub starts
Nov 26 12:42:04 compute-0 ceph-mon[74966]: 8.1a scrub ok
Nov 26 12:42:04 compute-0 sshd-session[114015]: Accepted publickey for zuul from 192.168.122.30 port 59648 ssh2: ECDSA SHA256:oYqKaXpw3UXfGsjV9kVmxjhHDxrL0kntNo/c2mjveus
Nov 26 12:42:04 compute-0 systemd-logind[777]: New session 37 of user zuul.
Nov 26 12:42:04 compute-0 systemd[1]: Started Session 37 of User zuul.
Nov 26 12:42:04 compute-0 sshd-session[114015]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:42:05 compute-0 ceph-mon[74966]: 8.5 scrub starts
Nov 26 12:42:05 compute-0 ceph-mon[74966]: 8.5 scrub ok
Nov 26 12:42:05 compute-0 ceph-mon[74966]: pgmap v240: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Nov 26 12:42:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Nov 26 12:42:05 compute-0 python3.9[114168]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:42:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:42:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:42:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:42:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:42:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:42:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:42:05 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:42:06 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Nov 26 12:42:06 compute-0 ceph-mon[74966]: 11.1e scrub starts
Nov 26 12:42:06 compute-0 ceph-mon[74966]: 11.1e scrub ok
Nov 26 12:42:06 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Nov 26 12:42:06 compute-0 python3.9[114322]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:42:06 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Nov 26 12:42:06 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Nov 26 12:42:06 compute-0 sudo[114476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfunnqngpzmuxqdqiwunyzjdkuyntlbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160926.5880957-40-30443015896069/AnsiballZ_setup.py'
Nov 26 12:42:06 compute-0 sudo[114476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:06 compute-0 python3.9[114478]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 12:42:07 compute-0 ceph-mon[74966]: pgmap v241: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:07 compute-0 ceph-mon[74966]: 11.19 scrub starts
Nov 26 12:42:07 compute-0 ceph-mon[74966]: 11.19 scrub ok
Nov 26 12:42:07 compute-0 ceph-mon[74966]: 8.7 scrub starts
Nov 26 12:42:07 compute-0 ceph-mon[74966]: 8.7 scrub ok
Nov 26 12:42:07 compute-0 sudo[114476]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:07 compute-0 sudo[114560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wblcovwabciwfakzmttttymkcifynyed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160926.5880957-40-30443015896069/AnsiballZ_dnf.py'
Nov 26 12:42:07 compute-0 sudo[114560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:07 compute-0 python3.9[114562]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 12:42:07 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Nov 26 12:42:07 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Nov 26 12:42:07 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:08 compute-0 ceph-mon[74966]: 8.8 scrub starts
Nov 26 12:42:08 compute-0 ceph-mon[74966]: 8.8 scrub ok
Nov 26 12:42:08 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Nov 26 12:42:08 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Nov 26 12:42:08 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 26 12:42:08 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 26 12:42:08 compute-0 sudo[114560]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:08 compute-0 sudo[114713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgbclvtzmvpxsfwxtfqcrcdhjsbsyzcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160928.7553024-52-103249965684511/AnsiballZ_setup.py'
Nov 26 12:42:08 compute-0 sudo[114713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:09 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Nov 26 12:42:09 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Nov 26 12:42:09 compute-0 ceph-mon[74966]: pgmap v242: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:09 compute-0 ceph-mon[74966]: 11.10 scrub starts
Nov 26 12:42:09 compute-0 ceph-mon[74966]: 11.10 scrub ok
Nov 26 12:42:09 compute-0 ceph-mon[74966]: 11.11 scrub starts
Nov 26 12:42:09 compute-0 ceph-mon[74966]: 11.11 scrub ok
Nov 26 12:42:09 compute-0 python3.9[114715]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 12:42:09 compute-0 sudo[114713]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:09 compute-0 sudo[114908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdlfzbxabmmqhmrozvfwbecmpndyicaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160929.52416-63-224606446565830/AnsiballZ_file.py'
Nov 26 12:42:09 compute-0 sudo[114908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:09 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:09 compute-0 python3.9[114910]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:42:09 compute-0 sudo[114908]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:10 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.d deep-scrub starts
Nov 26 12:42:10 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.d deep-scrub ok
Nov 26 12:42:10 compute-0 ceph-mon[74966]: 8.6 scrub starts
Nov 26 12:42:10 compute-0 ceph-mon[74966]: 8.6 scrub ok
Nov 26 12:42:10 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.1c deep-scrub starts
Nov 26 12:42:10 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.1c deep-scrub ok
Nov 26 12:42:10 compute-0 sudo[115060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajbbtbeegsxvibbfonwluhgdemdxxkrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160930.0972278-71-70684482938088/AnsiballZ_command.py'
Nov 26 12:42:10 compute-0 sudo[115060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:10 compute-0 python3.9[115062]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:42:10 compute-0 sudo[115060]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:10 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:42:10 compute-0 sudo[115221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvigembrpzxisiyebucfsceseebnvplt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160930.6884391-79-186639133704817/AnsiballZ_stat.py'
Nov 26 12:42:10 compute-0 sudo[115221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:11 compute-0 python3.9[115223]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:42:11 compute-0 ceph-mon[74966]: pgmap v243: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:11 compute-0 ceph-mon[74966]: 10.d deep-scrub starts
Nov 26 12:42:11 compute-0 ceph-mon[74966]: 10.d deep-scrub ok
Nov 26 12:42:11 compute-0 ceph-mon[74966]: 8.1c deep-scrub starts
Nov 26 12:42:11 compute-0 ceph-mon[74966]: 8.1c deep-scrub ok
Nov 26 12:42:11 compute-0 sudo[115221]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:11 compute-0 sudo[115299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnoipcajkxituuhckkfqaqhlqdxziwpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160930.6884391-79-186639133704817/AnsiballZ_file.py'
Nov 26 12:42:11 compute-0 sudo[115299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:11 compute-0 python3.9[115301]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:42:11 compute-0 sudo[115299]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:11 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.a deep-scrub starts
Nov 26 12:42:11 compute-0 sudo[115451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvififpipsdqircysvtrkjwawsykekpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160931.5519488-91-182873370284042/AnsiballZ_stat.py'
Nov 26 12:42:11 compute-0 sudo[115451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:11 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.a deep-scrub ok
Nov 26 12:42:11 compute-0 python3.9[115453]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:42:11 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:11 compute-0 sudo[115451]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:12 compute-0 sudo[115529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiukzuffpprwfzjygrknvmjgxhigygxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160931.5519488-91-182873370284042/AnsiballZ_file.py'
Nov 26 12:42:12 compute-0 sudo[115529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:12 compute-0 ceph-mon[74966]: 8.a deep-scrub starts
Nov 26 12:42:12 compute-0 ceph-mon[74966]: 8.a deep-scrub ok
Nov 26 12:42:12 compute-0 python3.9[115531]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:42:12 compute-0 sudo[115529]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:12 compute-0 sudo[115681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psgujufebtcqtxftdmwzhiafhatusaru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160932.376181-104-92568144579470/AnsiballZ_ini_file.py'
Nov 26 12:42:12 compute-0 sudo[115681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:12 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.2 deep-scrub starts
Nov 26 12:42:12 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.2 deep-scrub ok
Nov 26 12:42:12 compute-0 python3.9[115683]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:42:12 compute-0 sudo[115681]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:13 compute-0 sudo[115833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnourgrcqvyeuqvonmfxlskzyijfngba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160932.9398167-104-268062324507933/AnsiballZ_ini_file.py'
Nov 26 12:42:13 compute-0 sudo[115833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:13 compute-0 ceph-mon[74966]: pgmap v244: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:13 compute-0 ceph-mon[74966]: 9.2 deep-scrub starts
Nov 26 12:42:13 compute-0 ceph-mon[74966]: 9.2 deep-scrub ok
Nov 26 12:42:13 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Nov 26 12:42:13 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Nov 26 12:42:13 compute-0 python3.9[115835]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:42:13 compute-0 sudo[115833]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:13 compute-0 sudo[115985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltlvtfnkznkwnbohsyhbgjlalkexwegb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160933.36292-104-29234675904857/AnsiballZ_ini_file.py'
Nov 26 12:42:13 compute-0 sudo[115985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:13 compute-0 python3.9[115987]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:42:13 compute-0 sudo[115985]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:13 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:13 compute-0 sudo[116137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auqhmhkcoxymaijexvkoufjuvekecbzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160933.7848177-104-46079684828852/AnsiballZ_ini_file.py'
Nov 26 12:42:13 compute-0 sudo[116137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:14 compute-0 python3.9[116139]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:42:14 compute-0 sudo[116137]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:14 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 26 12:42:14 compute-0 ceph-mon[74966]: 8.12 scrub starts
Nov 26 12:42:14 compute-0 ceph-mon[74966]: 8.12 scrub ok
Nov 26 12:42:14 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 26 12:42:14 compute-0 sudo[116289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiegdarggdmrdiismszemroqsznadvok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160934.2826958-135-34620938892004/AnsiballZ_dnf.py'
Nov 26 12:42:14 compute-0 sudo[116289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:14 compute-0 python3.9[116291]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 12:42:15 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.7 deep-scrub starts
Nov 26 12:42:15 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.7 deep-scrub ok
Nov 26 12:42:15 compute-0 ceph-mon[74966]: pgmap v245: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:15 compute-0 ceph-mon[74966]: 11.12 scrub starts
Nov 26 12:42:15 compute-0 ceph-mon[74966]: 11.12 scrub ok
Nov 26 12:42:15 compute-0 sudo[116289]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:15 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:15 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:42:16 compute-0 sudo[116442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tviwvlcqvgztsohokazexggpuiudsboc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160935.9127903-146-224125737564652/AnsiballZ_setup.py'
Nov 26 12:42:16 compute-0 sudo[116442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:16 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Nov 26 12:42:16 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Nov 26 12:42:16 compute-0 ceph-mon[74966]: 10.7 deep-scrub starts
Nov 26 12:42:16 compute-0 ceph-mon[74966]: 10.7 deep-scrub ok
Nov 26 12:42:16 compute-0 python3.9[116444]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:42:16 compute-0 sudo[116442]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:16 compute-0 sudo[116596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcakxkxkiyicskpagcczabilvaktxohy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160936.4818974-154-225359063553878/AnsiballZ_stat.py'
Nov 26 12:42:16 compute-0 sudo[116596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:16 compute-0 python3.9[116598]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:42:16 compute-0 sudo[116596]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:17 compute-0 sudo[116748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-witvfuzpabrneoiehwgedoysnsrwmbve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160936.9545734-163-67998393430812/AnsiballZ_stat.py'
Nov 26 12:42:17 compute-0 sudo[116748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:17 compute-0 ceph-mon[74966]: pgmap v246: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:17 compute-0 ceph-mon[74966]: 10.4 scrub starts
Nov 26 12:42:17 compute-0 ceph-mon[74966]: 10.4 scrub ok
Nov 26 12:42:17 compute-0 python3.9[116750]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:42:17 compute-0 sudo[116748]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:17 compute-0 sudo[116900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mafgryqfdncwvfjkxaqbawwzbbprapeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160937.4700203-173-132857043621659/AnsiballZ_command.py'
Nov 26 12:42:17 compute-0 sudo[116900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:17 compute-0 python3.9[116902]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:42:17 compute-0 sudo[116900]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:17 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:18 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Nov 26 12:42:18 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Nov 26 12:42:18 compute-0 sudo[117053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybykpbpqjjxgqwzicahmqqzwhawuznbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160937.9754984-183-43844854455955/AnsiballZ_service_facts.py'
Nov 26 12:42:18 compute-0 sudo[117053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:18 compute-0 python3.9[117055]: ansible-service_facts Invoked
Nov 26 12:42:18 compute-0 network[117072]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 12:42:18 compute-0 network[117073]: 'network-scripts' will be removed from distribution in near future.
Nov 26 12:42:18 compute-0 network[117074]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 12:42:19 compute-0 ceph-mon[74966]: pgmap v247: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:19 compute-0 ceph-mon[74966]: 11.3 scrub starts
Nov 26 12:42:19 compute-0 ceph-mon[74966]: 11.3 scrub ok
Nov 26 12:42:19 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Nov 26 12:42:19 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Nov 26 12:42:19 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:20 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Nov 26 12:42:20 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Nov 26 12:42:20 compute-0 ceph-mon[74966]: 8.11 scrub starts
Nov 26 12:42:20 compute-0 ceph-mon[74966]: 8.11 scrub ok
Nov 26 12:42:20 compute-0 sudo[117053]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:20 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.5 deep-scrub starts
Nov 26 12:42:20 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.5 deep-scrub ok
Nov 26 12:42:20 compute-0 sudo[117357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcagcqjifomxyucywdhxbqnmcsukycoa ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764160940.6344378-198-207225948357777/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764160940.6344378-198-207225948357777/args'
Nov 26 12:42:20 compute-0 sudo[117357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:20 compute-0 sudo[117357]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:42:21 compute-0 ceph-mon[74966]: pgmap v248: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:21 compute-0 ceph-mon[74966]: 10.8 scrub starts
Nov 26 12:42:21 compute-0 ceph-mon[74966]: 10.8 scrub ok
Nov 26 12:42:21 compute-0 ceph-mon[74966]: 7.5 deep-scrub starts
Nov 26 12:42:21 compute-0 ceph-mon[74966]: 7.5 deep-scrub ok
Nov 26 12:42:21 compute-0 sudo[117531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrcxkspmkejiettezzpdyipdacbdohfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160941.0242972-209-134324907051176/AnsiballZ_dnf.py'
Nov 26 12:42:21 compute-0 sudo[117531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:21 compute-0 sudo[117515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:42:21 compute-0 sudo[117515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:42:21 compute-0 sudo[117515]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:21 compute-0 sudo[117552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:42:21 compute-0 sudo[117552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:42:21 compute-0 sudo[117552]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:21 compute-0 sudo[117577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:42:21 compute-0 sudo[117577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:42:21 compute-0 sudo[117577]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:21 compute-0 sudo[117602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 12:42:21 compute-0 sudo[117602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:42:21 compute-0 python3.9[117549]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 12:42:21 compute-0 sudo[117602]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:21 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:42:21 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:42:21 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 12:42:21 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:42:21 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 12:42:21 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:42:21 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 702f5c97-136a-4d01-98ff-d37ac22665e4 does not exist
Nov 26 12:42:21 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 730d1a08-54ec-4750-9a72-7a5eb86e42c8 does not exist
Nov 26 12:42:21 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 7ae52e54-2f0e-4949-bc1f-36d50358ff46 does not exist
Nov 26 12:42:21 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 12:42:21 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:42:21 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 12:42:21 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:42:21 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:42:21 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:42:21 compute-0 sudo[117656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:42:21 compute-0 sudo[117656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:42:21 compute-0 sudo[117656]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:21 compute-0 sudo[117681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:42:21 compute-0 sudo[117681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:42:21 compute-0 sudo[117681]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:21 compute-0 sudo[117706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:42:21 compute-0 sudo[117706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:42:21 compute-0 sudo[117706]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:21 compute-0 sudo[117731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 12:42:21 compute-0 sudo[117731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:42:21 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:22 compute-0 podman[117788]: 2025-11-26 12:42:22.068994181 +0000 UTC m=+0.027985860 container create 13638a4c1a17906637aa5177891c9dc696a480cc23fa0567b88e19508a609e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_matsumoto, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:42:22 compute-0 systemd[1]: Started libpod-conmon-13638a4c1a17906637aa5177891c9dc696a480cc23fa0567b88e19508a609e82.scope.
Nov 26 12:42:22 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:42:22 compute-0 podman[117788]: 2025-11-26 12:42:22.122092594 +0000 UTC m=+0.081084283 container init 13638a4c1a17906637aa5177891c9dc696a480cc23fa0567b88e19508a609e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_matsumoto, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 12:42:22 compute-0 podman[117788]: 2025-11-26 12:42:22.126456789 +0000 UTC m=+0.085448458 container start 13638a4c1a17906637aa5177891c9dc696a480cc23fa0567b88e19508a609e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:42:22 compute-0 podman[117788]: 2025-11-26 12:42:22.127523769 +0000 UTC m=+0.086515439 container attach 13638a4c1a17906637aa5177891c9dc696a480cc23fa0567b88e19508a609e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 12:42:22 compute-0 magical_matsumoto[117801]: 167 167
Nov 26 12:42:22 compute-0 systemd[1]: libpod-13638a4c1a17906637aa5177891c9dc696a480cc23fa0567b88e19508a609e82.scope: Deactivated successfully.
Nov 26 12:42:22 compute-0 conmon[117801]: conmon 13638a4c1a17906637aa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-13638a4c1a17906637aa5177891c9dc696a480cc23fa0567b88e19508a609e82.scope/container/memory.events
Nov 26 12:42:22 compute-0 podman[117788]: 2025-11-26 12:42:22.131176332 +0000 UTC m=+0.090168001 container died 13638a4c1a17906637aa5177891c9dc696a480cc23fa0567b88e19508a609e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:42:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-9438a59dba94669e26cf8f4704fad680d3b92e38f098fb69a0147e89db505a4a-merged.mount: Deactivated successfully.
Nov 26 12:42:22 compute-0 podman[117788]: 2025-11-26 12:42:22.153338216 +0000 UTC m=+0.112329885 container remove 13638a4c1a17906637aa5177891c9dc696a480cc23fa0567b88e19508a609e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 12:42:22 compute-0 podman[117788]: 2025-11-26 12:42:22.057780308 +0000 UTC m=+0.016771998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:42:22 compute-0 systemd[1]: libpod-conmon-13638a4c1a17906637aa5177891c9dc696a480cc23fa0567b88e19508a609e82.scope: Deactivated successfully.
Nov 26 12:42:22 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:42:22 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:42:22 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:42:22 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:42:22 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:42:22 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:42:22 compute-0 podman[117823]: 2025-11-26 12:42:22.265247096 +0000 UTC m=+0.027915798 container create 3b8fb9cac998b95baf23837c8fc81a0810f0ff0876406881b64fdc8ba8577143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_boyd, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:42:22 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 26 12:42:22 compute-0 systemd[1]: Started libpod-conmon-3b8fb9cac998b95baf23837c8fc81a0810f0ff0876406881b64fdc8ba8577143.scope.
Nov 26 12:42:22 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 26 12:42:22 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aca6a878efe8a2b85eae66dbeca18d899d4c919ab2d0a706ac9385a940c3f64a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aca6a878efe8a2b85eae66dbeca18d899d4c919ab2d0a706ac9385a940c3f64a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aca6a878efe8a2b85eae66dbeca18d899d4c919ab2d0a706ac9385a940c3f64a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aca6a878efe8a2b85eae66dbeca18d899d4c919ab2d0a706ac9385a940c3f64a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aca6a878efe8a2b85eae66dbeca18d899d4c919ab2d0a706ac9385a940c3f64a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:42:22 compute-0 podman[117823]: 2025-11-26 12:42:22.311391886 +0000 UTC m=+0.074060586 container init 3b8fb9cac998b95baf23837c8fc81a0810f0ff0876406881b64fdc8ba8577143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_boyd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Nov 26 12:42:22 compute-0 podman[117823]: 2025-11-26 12:42:22.318343305 +0000 UTC m=+0.081012006 container start 3b8fb9cac998b95baf23837c8fc81a0810f0ff0876406881b64fdc8ba8577143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_boyd, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:42:22 compute-0 podman[117823]: 2025-11-26 12:42:22.319334934 +0000 UTC m=+0.082003634 container attach 3b8fb9cac998b95baf23837c8fc81a0810f0ff0876406881b64fdc8ba8577143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_boyd, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:42:22 compute-0 podman[117823]: 2025-11-26 12:42:22.253094275 +0000 UTC m=+0.015762976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:42:22 compute-0 sudo[117531]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:22 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Nov 26 12:42:22 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Nov 26 12:42:23 compute-0 sudo[118013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nydbhzqqcsdlcctgrsbastyxqjucqnnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160942.6600728-222-164546865608183/AnsiballZ_package_facts.py'
Nov 26 12:42:23 compute-0 optimistic_boyd[117836]: --> passed data devices: 0 physical, 3 LVM
Nov 26 12:42:23 compute-0 optimistic_boyd[117836]: --> relative data size: 1.0
Nov 26 12:42:23 compute-0 optimistic_boyd[117836]: --> All data devices are unavailable
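The `lvm batch --no-auto ... --yes --no-systemd` run above exits without creating anything: ceph-volume reports "0 physical, 3 LVM" data devices and then that all of them are unavailable, which is consistent with the `lvm list` output further below showing the three LVs already tagged as osd.0/1/2 for this cluster fsid. A minimal sketch, assuming only the standard LVM2 `lvs` tool, of how one could confirm from the host which LVs already carry ceph-volume's ownership tags (this approximates, but is not, ceph-volume's own availability check):

    import subprocess

    def consumed_ceph_lvs():
        # List vg/lv names plus LVM tags; ceph-volume writes ceph.* tags
        # (ceph.osd_id, ceph.osd_fsid, ...) onto every LV it has prepared.
        out = subprocess.run(
            ["lvs", "--noheadings", "--separator", ";",
             "-o", "vg_name,lv_name,lv_tags"],
            check=True, capture_output=True, text=True,
        ).stdout
        consumed = {}
        for line in out.splitlines():
            if not line.strip():
                continue
            vg, lv, tags = (f.strip() for f in line.split(";", 2))
            if "ceph.osd_id=" in tags:
                consumed[f"/dev/{vg}/{lv}"] = tags
        return consumed

    if __name__ == "__main__":
        for path, tags in sorted(consumed_ceph_lvs().items()):
            print(path, "->", tags)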
Nov 26 12:42:23 compute-0 sudo[118013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:23 compute-0 systemd[1]: libpod-3b8fb9cac998b95baf23837c8fc81a0810f0ff0876406881b64fdc8ba8577143.scope: Deactivated successfully.
Nov 26 12:42:23 compute-0 podman[117823]: 2025-11-26 12:42:23.127673053 +0000 UTC m=+0.890341755 container died 3b8fb9cac998b95baf23837c8fc81a0810f0ff0876406881b64fdc8ba8577143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_boyd, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:42:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-aca6a878efe8a2b85eae66dbeca18d899d4c919ab2d0a706ac9385a940c3f64a-merged.mount: Deactivated successfully.
Nov 26 12:42:23 compute-0 podman[117823]: 2025-11-26 12:42:23.162231147 +0000 UTC m=+0.924899848 container remove 3b8fb9cac998b95baf23837c8fc81a0810f0ff0876406881b64fdc8ba8577143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_boyd, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:42:23 compute-0 ceph-mon[74966]: pgmap v249: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:23 compute-0 ceph-mon[74966]: 11.9 scrub starts
Nov 26 12:42:23 compute-0 ceph-mon[74966]: 11.9 scrub ok
Nov 26 12:42:23 compute-0 ceph-mon[74966]: 8.13 scrub starts
Nov 26 12:42:23 compute-0 ceph-mon[74966]: 8.13 scrub ok
Nov 26 12:42:23 compute-0 systemd[1]: libpod-conmon-3b8fb9cac998b95baf23837c8fc81a0810f0ff0876406881b64fdc8ba8577143.scope: Deactivated successfully.
Nov 26 12:42:23 compute-0 sudo[117731]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:23 compute-0 sudo[118028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:42:23 compute-0 sudo[118028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:42:23 compute-0 sudo[118028]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:23 compute-0 sudo[118053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:42:23 compute-0 sudo[118053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:42:23 compute-0 sudo[118053]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:23 compute-0 sudo[118078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:42:23 compute-0 sudo[118078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:42:23 compute-0 sudo[118078]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:23 compute-0 sudo[118103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- lvm list --format json
Nov 26 12:42:23 compute-0 sudo[118103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:42:23 compute-0 python3.9[118016]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 26 12:42:23 compute-0 sudo[118013]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:23 compute-0 podman[118160]: 2025-11-26 12:42:23.588696653 +0000 UTC m=+0.031464745 container create 26c23ef4d6b0b2476e3b361d0c123eed3e7a01b00e3ae58733571fd2060567dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 12:42:23 compute-0 systemd[1]: Started libpod-conmon-26c23ef4d6b0b2476e3b361d0c123eed3e7a01b00e3ae58733571fd2060567dc.scope.
Nov 26 12:42:23 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:42:23 compute-0 podman[118160]: 2025-11-26 12:42:23.646545889 +0000 UTC m=+0.089313981 container init 26c23ef4d6b0b2476e3b361d0c123eed3e7a01b00e3ae58733571fd2060567dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldstine, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:42:23 compute-0 podman[118160]: 2025-11-26 12:42:23.650816207 +0000 UTC m=+0.093584289 container start 26c23ef4d6b0b2476e3b361d0c123eed3e7a01b00e3ae58733571fd2060567dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldstine, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 12:42:23 compute-0 podman[118160]: 2025-11-26 12:42:23.652212227 +0000 UTC m=+0.094980310 container attach 26c23ef4d6b0b2476e3b361d0c123eed3e7a01b00e3ae58733571fd2060567dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldstine, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:42:23 compute-0 goofy_goldstine[118197]: 167 167
Nov 26 12:42:23 compute-0 systemd[1]: libpod-26c23ef4d6b0b2476e3b361d0c123eed3e7a01b00e3ae58733571fd2060567dc.scope: Deactivated successfully.
Nov 26 12:42:23 compute-0 podman[118160]: 2025-11-26 12:42:23.654540765 +0000 UTC m=+0.097308847 container died 26c23ef4d6b0b2476e3b361d0c123eed3e7a01b00e3ae58733571fd2060567dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:42:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a2e2f62a458a10aab5b1bfc42da666a4a878c953d29757c20d7c64b19bf5961-merged.mount: Deactivated successfully.
Nov 26 12:42:23 compute-0 podman[118160]: 2025-11-26 12:42:23.574618254 +0000 UTC m=+0.017386356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:42:23 compute-0 podman[118160]: 2025-11-26 12:42:23.672663086 +0000 UTC m=+0.115431169 container remove 26c23ef4d6b0b2476e3b361d0c123eed3e7a01b00e3ae58733571fd2060567dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldstine, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:42:23 compute-0 systemd[1]: libpod-conmon-26c23ef4d6b0b2476e3b361d0c123eed3e7a01b00e3ae58733571fd2060567dc.scope: Deactivated successfully.
Nov 26 12:42:23 compute-0 podman[118219]: 2025-11-26 12:42:23.786440348 +0000 UTC m=+0.029062608 container create 1e68bea15a2311f4cf64096dbeb375f9224566c4f97e2e46105534a6e3ee6555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:42:23 compute-0 systemd[1]: Started libpod-conmon-1e68bea15a2311f4cf64096dbeb375f9224566c4f97e2e46105534a6e3ee6555.scope.
Nov 26 12:42:23 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:42:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49962f512c783eb33cc567d1a572b6d18584e531ddde4865c51f1c3b953247ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:42:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49962f512c783eb33cc567d1a572b6d18584e531ddde4865c51f1c3b953247ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:42:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49962f512c783eb33cc567d1a572b6d18584e531ddde4865c51f1c3b953247ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:42:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49962f512c783eb33cc567d1a572b6d18584e531ddde4865c51f1c3b953247ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:42:23 compute-0 podman[118219]: 2025-11-26 12:42:23.844290635 +0000 UTC m=+0.086912916 container init 1e68bea15a2311f4cf64096dbeb375f9224566c4f97e2e46105534a6e3ee6555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_spence, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Nov 26 12:42:23 compute-0 podman[118219]: 2025-11-26 12:42:23.850066741 +0000 UTC m=+0.092689002 container start 1e68bea15a2311f4cf64096dbeb375f9224566c4f97e2e46105534a6e3ee6555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:42:23 compute-0 podman[118219]: 2025-11-26 12:42:23.85120223 +0000 UTC m=+0.093824490 container attach 1e68bea15a2311f4cf64096dbeb375f9224566c4f97e2e46105534a6e3ee6555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 12:42:23 compute-0 podman[118219]: 2025-11-26 12:42:23.774921892 +0000 UTC m=+0.017544173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:42:23 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:24 compute-0 sudo[118362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqwihujzqbzwflykitidnrscyrxffrwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160943.8472-232-52798158002430/AnsiballZ_stat.py'
Nov 26 12:42:24 compute-0 sudo[118362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:24 compute-0 python3.9[118364]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:42:24 compute-0 sudo[118362]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:24 compute-0 sudo[118440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwomvyxwyncpwuggiuowuxyyymltnalm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160943.8472-232-52798158002430/AnsiballZ_file.py'
Nov 26 12:42:24 compute-0 sudo[118440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:24 compute-0 reverent_spence[118232]: {
Nov 26 12:42:24 compute-0 reverent_spence[118232]:     "0": [
Nov 26 12:42:24 compute-0 reverent_spence[118232]:         {
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "devices": [
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "/dev/loop3"
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             ],
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "lv_name": "ceph_lv0",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "lv_size": "21470642176",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "name": "ceph_lv0",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "tags": {
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.cluster_name": "ceph",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.crush_device_class": "",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.encrypted": "0",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.osd_id": "0",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.type": "block",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.vdo": "0"
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             },
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "type": "block",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "vg_name": "ceph_vg0"
Nov 26 12:42:24 compute-0 reverent_spence[118232]:         }
Nov 26 12:42:24 compute-0 reverent_spence[118232]:     ],
Nov 26 12:42:24 compute-0 reverent_spence[118232]:     "1": [
Nov 26 12:42:24 compute-0 reverent_spence[118232]:         {
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "devices": [
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "/dev/loop4"
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             ],
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "lv_name": "ceph_lv1",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "lv_size": "21470642176",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "name": "ceph_lv1",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "tags": {
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.cluster_name": "ceph",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.crush_device_class": "",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.encrypted": "0",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.osd_id": "1",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.type": "block",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.vdo": "0"
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             },
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "type": "block",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "vg_name": "ceph_vg1"
Nov 26 12:42:24 compute-0 reverent_spence[118232]:         }
Nov 26 12:42:24 compute-0 reverent_spence[118232]:     ],
Nov 26 12:42:24 compute-0 reverent_spence[118232]:     "2": [
Nov 26 12:42:24 compute-0 reverent_spence[118232]:         {
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "devices": [
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "/dev/loop5"
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             ],
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "lv_name": "ceph_lv2",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "lv_size": "21470642176",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "name": "ceph_lv2",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "tags": {
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.cluster_name": "ceph",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.crush_device_class": "",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.encrypted": "0",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.osd_id": "2",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.type": "block",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:                 "ceph.vdo": "0"
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             },
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "type": "block",
Nov 26 12:42:24 compute-0 reverent_spence[118232]:             "vg_name": "ceph_vg2"
Nov 26 12:42:24 compute-0 reverent_spence[118232]:         }
Nov 26 12:42:24 compute-0 reverent_spence[118232]:     ]
Nov 26 12:42:24 compute-0 reverent_spence[118232]: }
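The JSON block above is the output of `ceph-volume lvm list --format json` run inside the throwaway container: a map from OSD id ("0", "1", "2") to the logical volume backing it, including the ceph.* tags. A minimal sketch, assuming the JSON has been captured to a file (the filename here is hypothetical), that turns it into an osd-id to device summary:

    import json

    def osd_map(path="lvm-list.json"):  # hypothetical capture of the JSON above
        with open(path) as f:
            listing = json.load(f)
        result = {}
        for osd_id, entries in listing.items():
            for entry in entries:  # one "block" entry per OSD in this listing
                result[osd_id] = {
                    "lv_path": entry["lv_path"],
                    "osd_fsid": entry["tags"]["ceph.osd_fsid"],
                    "devices": entry["devices"],
                }
        return result

    if __name__ == "__main__":
        for osd_id, info in sorted(osd_map().items()):
            print(f"osd.{osd_id}: {info['lv_path']} "
                  f"(fsid {info['osd_fsid']}) on {','.join(info['devices'])}")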
Nov 26 12:42:24 compute-0 systemd[1]: libpod-1e68bea15a2311f4cf64096dbeb375f9224566c4f97e2e46105534a6e3ee6555.scope: Deactivated successfully.
Nov 26 12:42:24 compute-0 podman[118219]: 2025-11-26 12:42:24.519147261 +0000 UTC m=+0.761769522 container died 1e68bea15a2311f4cf64096dbeb375f9224566c4f97e2e46105534a6e3ee6555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_spence, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 12:42:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-49962f512c783eb33cc567d1a572b6d18584e531ddde4865c51f1c3b953247ee-merged.mount: Deactivated successfully.
Nov 26 12:42:24 compute-0 python3.9[118442]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:42:24 compute-0 podman[118219]: 2025-11-26 12:42:24.54954242 +0000 UTC m=+0.792164681 container remove 1e68bea15a2311f4cf64096dbeb375f9224566c4f97e2e46105534a6e3ee6555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:42:24 compute-0 sudo[118440]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:24 compute-0 systemd[1]: libpod-conmon-1e68bea15a2311f4cf64096dbeb375f9224566c4f97e2e46105534a6e3ee6555.scope: Deactivated successfully.
Nov 26 12:42:24 compute-0 sudo[118103]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:24 compute-0 sudo[118478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:42:24 compute-0 sudo[118478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:42:24 compute-0 sudo[118478]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:24 compute-0 sudo[118507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:42:24 compute-0 sudo[118507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:42:24 compute-0 sudo[118507]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:24 compute-0 sudo[118532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:42:24 compute-0 sudo[118532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:42:24 compute-0 sudo[118532]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:24 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Nov 26 12:42:24 compute-0 sudo[118584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- raw list --format json
Nov 26 12:42:24 compute-0 sudo[118584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
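After the `lvm list` pass, cephadm also queries `ceph-volume raw list --format json` (the sudo command just above) so that OSDs prepared directly on raw block devices, outside LVM, would be reported as well. A hedged sketch of driving that same wrapper from Python, reusing the cephadm path, image digest and fsid exactly as they appear in the log; the JSON parsing is best-effort because the wrapper's stdout framing is not guaranteed here:

    import json
    import subprocess

    CEPHADM = ("/var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    FSID = "f7d7fe93-41e5-51c4-b72d-63b38686102e"

    def raw_list():
        # Same command the orchestrator issues via sudo in the line above.
        cmd = ["sudo", "/bin/python3", CEPHADM,
               "--image", IMAGE, "--timeout", "895",
               "ceph-volume", "--fsid", FSID, "--",
               "raw", "list", "--format", "json"]
        out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
        try:
            return json.loads(out)
        except ValueError:
            return out  # fall back to raw text if anything besides JSON was printed

    if __name__ == "__main__":
        print(raw_list())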
Nov 26 12:42:24 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Nov 26 12:42:24 compute-0 sudo[118716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eldfijtkhivaweshlxcftjqnkmqrhpck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160944.701047-244-187415641428907/AnsiballZ_stat.py'
Nov 26 12:42:24 compute-0 sudo[118716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:24 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.e scrub starts
Nov 26 12:42:24 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.e scrub ok
Nov 26 12:42:24 compute-0 podman[118741]: 2025-11-26 12:42:24.987438538 +0000 UTC m=+0.027406869 container create 7716cb19720cf415605a602f10e7cff3910898560e7dc3568ccd037740b2a692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_curran, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 12:42:25 compute-0 systemd[1]: Started libpod-conmon-7716cb19720cf415605a602f10e7cff3910898560e7dc3568ccd037740b2a692.scope.
Nov 26 12:42:25 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:42:25 compute-0 python3.9[118721]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:42:25 compute-0 podman[118741]: 2025-11-26 12:42:25.042316855 +0000 UTC m=+0.082285196 container init 7716cb19720cf415605a602f10e7cff3910898560e7dc3568ccd037740b2a692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_curran, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 12:42:25 compute-0 podman[118741]: 2025-11-26 12:42:25.047704238 +0000 UTC m=+0.087672569 container start 7716cb19720cf415605a602f10e7cff3910898560e7dc3568ccd037740b2a692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 12:42:25 compute-0 podman[118741]: 2025-11-26 12:42:25.048744298 +0000 UTC m=+0.088712629 container attach 7716cb19720cf415605a602f10e7cff3910898560e7dc3568ccd037740b2a692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_curran, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:42:25 compute-0 vigilant_curran[118754]: 167 167
Nov 26 12:42:25 compute-0 systemd[1]: libpod-7716cb19720cf415605a602f10e7cff3910898560e7dc3568ccd037740b2a692.scope: Deactivated successfully.
Nov 26 12:42:25 compute-0 conmon[118754]: conmon 7716cb19720cf415605a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7716cb19720cf415605a602f10e7cff3910898560e7dc3568ccd037740b2a692.scope/container/memory.events
Nov 26 12:42:25 compute-0 podman[118741]: 2025-11-26 12:42:25.051825043 +0000 UTC m=+0.091793394 container died 7716cb19720cf415605a602f10e7cff3910898560e7dc3568ccd037740b2a692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:42:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-21a8e92cd3bf203fe401307c7a4ad9e76998a194433ede8f79f3f8c5247592ae-merged.mount: Deactivated successfully.
Nov 26 12:42:25 compute-0 sudo[118716]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:25 compute-0 podman[118741]: 2025-11-26 12:42:24.976543977 +0000 UTC m=+0.016512318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:42:25 compute-0 podman[118741]: 2025-11-26 12:42:25.076158689 +0000 UTC m=+0.116127021 container remove 7716cb19720cf415605a602f10e7cff3910898560e7dc3568ccd037740b2a692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 26 12:42:25 compute-0 systemd[1]: libpod-conmon-7716cb19720cf415605a602f10e7cff3910898560e7dc3568ccd037740b2a692.scope: Deactivated successfully.
Nov 26 12:42:25 compute-0 ceph-mon[74966]: pgmap v250: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:25 compute-0 ceph-mon[74966]: 9.4 scrub starts
Nov 26 12:42:25 compute-0 ceph-mon[74966]: 9.4 scrub ok
Nov 26 12:42:25 compute-0 podman[118807]: 2025-11-26 12:42:25.193348391 +0000 UTC m=+0.032716164 container create ff2b6b86a16cc028690c34cd4198e8a89d8e4d72096e71852ba473b4d6091ada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 12:42:25 compute-0 systemd[1]: Started libpod-conmon-ff2b6b86a16cc028690c34cd4198e8a89d8e4d72096e71852ba473b4d6091ada.scope.
Nov 26 12:42:25 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:42:25 compute-0 sudo[118864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtatrzyrslwpmahnaynmlbxpjwmhanbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160944.701047-244-187415641428907/AnsiballZ_file.py'
Nov 26 12:42:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/155894c4bb4a346524e966d2a73f8640160aadea39ecbe71ed760b4cebc3c0db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:42:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/155894c4bb4a346524e966d2a73f8640160aadea39ecbe71ed760b4cebc3c0db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:42:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/155894c4bb4a346524e966d2a73f8640160aadea39ecbe71ed760b4cebc3c0db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:42:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/155894c4bb4a346524e966d2a73f8640160aadea39ecbe71ed760b4cebc3c0db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:42:25 compute-0 sudo[118864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:25 compute-0 podman[118807]: 2025-11-26 12:42:25.248506976 +0000 UTC m=+0.087874748 container init ff2b6b86a16cc028690c34cd4198e8a89d8e4d72096e71852ba473b4d6091ada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:42:25 compute-0 podman[118807]: 2025-11-26 12:42:25.253470711 +0000 UTC m=+0.092838474 container start ff2b6b86a16cc028690c34cd4198e8a89d8e4d72096e71852ba473b4d6091ada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:42:25 compute-0 podman[118807]: 2025-11-26 12:42:25.254505251 +0000 UTC m=+0.093873023 container attach ff2b6b86a16cc028690c34cd4198e8a89d8e4d72096e71852ba473b4d6091ada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:42:25 compute-0 podman[118807]: 2025-11-26 12:42:25.181474775 +0000 UTC m=+0.020842558 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:42:25 compute-0 python3.9[118869]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:42:25 compute-0 sudo[118864]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:25 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:25 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:42:25 compute-0 sudo[119046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nppqorofdofzinwlqmzlsqrkzwzggojb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160945.713801-262-106414338366548/AnsiballZ_lineinfile.py'
Nov 26 12:42:26 compute-0 sudo[119046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:26 compute-0 goofy_euler[118865]: {
Nov 26 12:42:26 compute-0 goofy_euler[118865]:     "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 12:42:26 compute-0 goofy_euler[118865]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:42:26 compute-0 goofy_euler[118865]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 12:42:26 compute-0 goofy_euler[118865]:         "osd_id": 1,
Nov 26 12:42:26 compute-0 goofy_euler[118865]:         "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:42:26 compute-0 goofy_euler[118865]:         "type": "bluestore"
Nov 26 12:42:26 compute-0 goofy_euler[118865]:     },
Nov 26 12:42:26 compute-0 goofy_euler[118865]:     "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 12:42:26 compute-0 goofy_euler[118865]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:42:26 compute-0 goofy_euler[118865]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 12:42:26 compute-0 goofy_euler[118865]:         "osd_id": 2,
Nov 26 12:42:26 compute-0 goofy_euler[118865]:         "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:42:26 compute-0 goofy_euler[118865]:         "type": "bluestore"
Nov 26 12:42:26 compute-0 goofy_euler[118865]:     },
Nov 26 12:42:26 compute-0 goofy_euler[118865]:     "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 12:42:26 compute-0 goofy_euler[118865]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:42:26 compute-0 goofy_euler[118865]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 12:42:26 compute-0 goofy_euler[118865]:         "osd_id": 0,
Nov 26 12:42:26 compute-0 goofy_euler[118865]:         "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:42:26 compute-0 goofy_euler[118865]:         "type": "bluestore"
Nov 26 12:42:26 compute-0 goofy_euler[118865]:     }
Nov 26 12:42:26 compute-0 goofy_euler[118865]: }
Nov 26 12:42:26 compute-0 systemd[1]: libpod-ff2b6b86a16cc028690c34cd4198e8a89d8e4d72096e71852ba473b4d6091ada.scope: Deactivated successfully.
Nov 26 12:42:26 compute-0 podman[119052]: 2025-11-26 12:42:26.060090743 +0000 UTC m=+0.018449580 container died ff2b6b86a16cc028690c34cd4198e8a89d8e4d72096e71852ba473b4d6091ada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 12:42:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-155894c4bb4a346524e966d2a73f8640160aadea39ecbe71ed760b4cebc3c0db-merged.mount: Deactivated successfully.
Nov 26 12:42:26 compute-0 podman[119052]: 2025-11-26 12:42:26.090099934 +0000 UTC m=+0.048458772 container remove ff2b6b86a16cc028690c34cd4198e8a89d8e4d72096e71852ba473b4d6091ada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:42:26 compute-0 systemd[1]: libpod-conmon-ff2b6b86a16cc028690c34cd4198e8a89d8e4d72096e71852ba473b4d6091ada.scope: Deactivated successfully.
Nov 26 12:42:26 compute-0 sudo[118584]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:42:26 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:42:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:42:26 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:42:26 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev b16754c2-ff89-4c11-b913-784d4d2be1d5 does not exist
Nov 26 12:42:26 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 0ef91a1d-aebe-4997-9bda-fbb14816b259 does not exist
Nov 26 12:42:26 compute-0 python3.9[119049]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:42:26 compute-0 sudo[119064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:42:26 compute-0 sudo[119064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:42:26 compute-0 sudo[119064]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:26 compute-0 ceph-mon[74966]: 10.e scrub starts
Nov 26 12:42:26 compute-0 ceph-mon[74966]: 10.e scrub ok
Nov 26 12:42:26 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:42:26 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:42:26 compute-0 sudo[119046]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:26 compute-0 sudo[119089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 12:42:26 compute-0 sudo[119089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:42:26 compute-0 sudo[119089]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:26 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Nov 26 12:42:26 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Nov 26 12:42:26 compute-0 sudo[119263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpfcbdtcsnujexficetkkckcmogtqjmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160946.5449574-277-141145573806308/AnsiballZ_setup.py'
Nov 26 12:42:26 compute-0 sudo[119263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:26 compute-0 python3.9[119265]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 12:42:27 compute-0 sudo[119263]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:27 compute-0 ceph-mon[74966]: pgmap v251: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:27 compute-0 ceph-mon[74966]: 8.16 scrub starts
Nov 26 12:42:27 compute-0 ceph-mon[74966]: 8.16 scrub ok
Nov 26 12:42:27 compute-0 sudo[119347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebxyhausqqruvuvxnnubpjqtsugvyzkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160946.5449574-277-141145573806308/AnsiballZ_systemd.py'
Nov 26 12:42:27 compute-0 sudo[119347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:27 compute-0 python3.9[119349]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:42:27 compute-0 sudo[119347]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:27 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:28 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Nov 26 12:42:28 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Nov 26 12:42:28 compute-0 sshd-session[114018]: Connection closed by 192.168.122.30 port 59648
Nov 26 12:42:28 compute-0 sshd-session[114015]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:42:28 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Nov 26 12:42:28 compute-0 systemd[1]: session-37.scope: Consumed 16.836s CPU time.
Nov 26 12:42:28 compute-0 systemd-logind[777]: Session 37 logged out. Waiting for processes to exit.
Nov 26 12:42:28 compute-0 systemd-logind[777]: Removed session 37.
Nov 26 12:42:28 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Nov 26 12:42:28 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Nov 26 12:42:29 compute-0 ceph-mon[74966]: pgmap v252: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:29 compute-0 ceph-mon[74966]: 11.1f scrub starts
Nov 26 12:42:29 compute-0 ceph-mon[74966]: 11.1f scrub ok
Nov 26 12:42:29 compute-0 ceph-mon[74966]: 8.17 scrub starts
Nov 26 12:42:29 compute-0 ceph-mon[74966]: 8.17 scrub ok
Nov 26 12:42:29 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.15 deep-scrub starts
Nov 26 12:42:29 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.15 deep-scrub ok
Nov 26 12:42:29 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:30 compute-0 ceph-mon[74966]: 11.15 deep-scrub starts
Nov 26 12:42:30 compute-0 ceph-mon[74966]: 11.15 deep-scrub ok
Nov 26 12:42:30 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Nov 26 12:42:30 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Nov 26 12:42:30 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:42:31 compute-0 ceph-mon[74966]: pgmap v253: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:31 compute-0 ceph-mon[74966]: 8.19 scrub starts
Nov 26 12:42:31 compute-0 ceph-mon[74966]: 8.19 scrub ok
Nov 26 12:42:31 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Nov 26 12:42:31 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Nov 26 12:42:31 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Nov 26 12:42:31 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Nov 26 12:42:31 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:32 compute-0 ceph-mon[74966]: 11.1a scrub starts
Nov 26 12:42:32 compute-0 ceph-mon[74966]: 11.1a scrub ok
Nov 26 12:42:32 compute-0 ceph-mon[74966]: 8.1e scrub starts
Nov 26 12:42:32 compute-0 ceph-mon[74966]: 8.1e scrub ok
Nov 26 12:42:32 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.a scrub starts
Nov 26 12:42:32 compute-0 sshd-session[119376]: Accepted publickey for zuul from 192.168.122.30 port 34004 ssh2: ECDSA SHA256:oYqKaXpw3UXfGsjV9kVmxjhHDxrL0kntNo/c2mjveus
Nov 26 12:42:32 compute-0 systemd-logind[777]: New session 38 of user zuul.
Nov 26 12:42:32 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.a scrub ok
Nov 26 12:42:32 compute-0 systemd[1]: Started Session 38 of User zuul.
Nov 26 12:42:32 compute-0 sshd-session[119376]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:42:32 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Nov 26 12:42:32 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Nov 26 12:42:33 compute-0 ceph-mon[74966]: pgmap v254: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:33 compute-0 ceph-mon[74966]: 9.a scrub starts
Nov 26 12:42:33 compute-0 ceph-mon[74966]: 9.a scrub ok
Nov 26 12:42:33 compute-0 sudo[119529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viobxuvidklvadwsjfsyzsleiogpgiae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160952.8750527-22-29684557703768/AnsiballZ_file.py'
Nov 26 12:42:33 compute-0 sudo[119529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:33 compute-0 python3.9[119531]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:42:33 compute-0 sudo[119529]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:33 compute-0 sudo[119681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgtndsqjcxesxpyzvqdkkfrmumfarxxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160953.489155-34-267282757662711/AnsiballZ_stat.py'
Nov 26 12:42:33 compute-0 sudo[119681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:33 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:33 compute-0 python3.9[119683]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:42:33 compute-0 sudo[119681]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:34 compute-0 sudo[119759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpdfkswycfuspzcbxhsbwlsqaqjlnfqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160953.489155-34-267282757662711/AnsiballZ_file.py'
Nov 26 12:42:34 compute-0 sudo[119759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:34 compute-0 ceph-mon[74966]: 10.1 scrub starts
Nov 26 12:42:34 compute-0 ceph-mon[74966]: 10.1 scrub ok
Nov 26 12:42:34 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 26 12:42:34 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 26 12:42:34 compute-0 python3.9[119761]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:42:34 compute-0 sudo[119759]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:34 compute-0 sshd-session[119379]: Connection closed by 192.168.122.30 port 34004
Nov 26 12:42:34 compute-0 sshd-session[119376]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:42:34 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Nov 26 12:42:34 compute-0 systemd[1]: session-38.scope: Consumed 1.100s CPU time.
Nov 26 12:42:34 compute-0 systemd-logind[777]: Session 38 logged out. Waiting for processes to exit.
Nov 26 12:42:34 compute-0 systemd-logind[777]: Removed session 38.
Nov 26 12:42:34 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Nov 26 12:42:34 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Nov 26 12:42:35 compute-0 ceph-mon[74966]: pgmap v255: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:35 compute-0 ceph-mon[74966]: 7.11 scrub starts
Nov 26 12:42:35 compute-0 ceph-mon[74966]: 7.11 scrub ok
Nov 26 12:42:35 compute-0 ceph-mon[74966]: 9.10 scrub starts
Nov 26 12:42:35 compute-0 ceph-mon[74966]: 9.10 scrub ok
Nov 26 12:42:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:42:35
Nov 26 12:42:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 12:42:35 compute-0 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 12:42:35 compute-0 ceph-mgr[75236]: [balancer INFO root] pools ['.mgr', 'images', 'vms', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data']
Nov 26 12:42:35 compute-0 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 12:42:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:42:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:42:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:42:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:42:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:42:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:42:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 12:42:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:42:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 12:42:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:42:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:42:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:42:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:42:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:42:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:42:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:42:35 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:42:36 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Nov 26 12:42:36 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Nov 26 12:42:37 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Nov 26 12:42:37 compute-0 ceph-mon[74966]: pgmap v256: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:37 compute-0 ceph-mon[74966]: 9.12 scrub starts
Nov 26 12:42:37 compute-0 ceph-mon[74966]: 9.12 scrub ok
Nov 26 12:42:37 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Nov 26 12:42:37 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:38 compute-0 ceph-mon[74966]: 7.15 scrub starts
Nov 26 12:42:38 compute-0 ceph-mon[74966]: 7.15 scrub ok
Nov 26 12:42:39 compute-0 ceph-mon[74966]: pgmap v257: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:39 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:40 compute-0 sshd-session[119786]: Accepted publickey for zuul from 192.168.122.30 port 34018 ssh2: ECDSA SHA256:oYqKaXpw3UXfGsjV9kVmxjhHDxrL0kntNo/c2mjveus
Nov 26 12:42:40 compute-0 systemd-logind[777]: New session 39 of user zuul.
Nov 26 12:42:40 compute-0 systemd[1]: Started Session 39 of User zuul.
Nov 26 12:42:40 compute-0 sshd-session[119786]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:42:40 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Nov 26 12:42:40 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Nov 26 12:42:40 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:42:40 compute-0 python3.9[119939]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:42:41 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Nov 26 12:42:41 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Nov 26 12:42:41 compute-0 ceph-mon[74966]: pgmap v258: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:41 compute-0 ceph-mon[74966]: 9.14 scrub starts
Nov 26 12:42:41 compute-0 ceph-mon[74966]: 9.14 scrub ok
Nov 26 12:42:41 compute-0 sudo[120093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvpaccwodlfiqfyyjjyfnfwlpbugnafp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160961.2599194-33-257250813424166/AnsiballZ_file.py'
Nov 26 12:42:41 compute-0 sudo[120093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:41 compute-0 python3.9[120095]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:42:41 compute-0 sudo[120093]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:41 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:42 compute-0 sudo[120268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msabtwpxckxppplbhceuqrezscmfjpvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160961.8359008-41-20288579072430/AnsiballZ_stat.py'
Nov 26 12:42:42 compute-0 sudo[120268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:42 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.6 deep-scrub starts
Nov 26 12:42:42 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.6 deep-scrub ok
Nov 26 12:42:42 compute-0 ceph-mon[74966]: 7.1c scrub starts
Nov 26 12:42:42 compute-0 ceph-mon[74966]: 7.1c scrub ok
Nov 26 12:42:42 compute-0 python3.9[120270]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:42:42 compute-0 sudo[120268]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:42 compute-0 sudo[120346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wynctuvvsurvahpjpocztqnfhxulgjzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160961.8359008-41-20288579072430/AnsiballZ_file.py'
Nov 26 12:42:42 compute-0 sudo[120346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:42 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 26 12:42:42 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 26 12:42:42 compute-0 python3.9[120348]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.yhz2wz8t recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:42:42 compute-0 sudo[120346]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:43 compute-0 sudo[120498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwvixysegyxpwgsxkqvwhekbidufliyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160962.8934388-61-197998994145/AnsiballZ_stat.py'
Nov 26 12:42:43 compute-0 sudo[120498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:43 compute-0 ceph-mon[74966]: pgmap v259: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:43 compute-0 ceph-mon[74966]: 9.6 deep-scrub starts
Nov 26 12:42:43 compute-0 ceph-mon[74966]: 9.6 deep-scrub ok
Nov 26 12:42:43 compute-0 ceph-mon[74966]: 9.1a scrub starts
Nov 26 12:42:43 compute-0 ceph-mon[74966]: 9.1a scrub ok
Nov 26 12:42:43 compute-0 python3.9[120500]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:42:43 compute-0 sudo[120498]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:43 compute-0 sudo[120576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otwotdhxkhiofnkaorhpxwgpqysvbeoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160962.8934388-61-197998994145/AnsiballZ_file.py'
Nov 26 12:42:43 compute-0 sudo[120576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:43 compute-0 python3.9[120578]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.xkgjozvv recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:42:43 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Nov 26 12:42:43 compute-0 sudo[120576]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:43 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Nov 26 12:42:43 compute-0 sudo[120728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhoybupmmhvxufkeznyyittycxywmktn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160963.6856894-74-222845954337950/AnsiballZ_file.py'
Nov 26 12:42:43 compute-0 sudo[120728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:43 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:44 compute-0 python3.9[120730]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:42:44 compute-0 sudo[120728]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:44 compute-0 ceph-mon[74966]: 11.5 scrub starts
Nov 26 12:42:44 compute-0 ceph-mon[74966]: 11.5 scrub ok
Nov 26 12:42:44 compute-0 sudo[120880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkoenvkkjknnwptimadpbsqflcsgsyqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160964.1237817-82-32404170629568/AnsiballZ_stat.py'
Nov 26 12:42:44 compute-0 sudo[120880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:44 compute-0 python3.9[120882]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:42:44 compute-0 sudo[120880]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:44 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Nov 26 12:42:44 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Nov 26 12:42:44 compute-0 sudo[120958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skncmgoxfnylbyryminueilzmegizwdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160964.1237817-82-32404170629568/AnsiballZ_file.py'
Nov 26 12:42:44 compute-0 sudo[120958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:44 compute-0 python3.9[120960]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:42:44 compute-0 sudo[120958]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:44 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Nov 26 12:42:44 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Nov 26 12:42:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 12:42:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:42:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 12:42:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:42:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:42:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:42:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:42:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:42:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:42:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:42:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:42:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:42:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 12:42:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:42:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:42:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:42:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 12:42:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:42:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 12:42:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:42:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:42:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:42:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 12:42:45 compute-0 sudo[121110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqgkrrpvemcqqpytoshljrqfhzdtrekw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160964.842286-82-59837741150697/AnsiballZ_stat.py'
Nov 26 12:42:45 compute-0 sudo[121110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:45 compute-0 python3.9[121112]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:42:45 compute-0 sudo[121110]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:45 compute-0 ceph-mon[74966]: pgmap v260: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:45 compute-0 ceph-mon[74966]: 11.7 scrub starts
Nov 26 12:42:45 compute-0 ceph-mon[74966]: 11.7 scrub ok
Nov 26 12:42:45 compute-0 sudo[121188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyyiuosebqgmzcvxlpoevhhbogqpipuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160964.842286-82-59837741150697/AnsiballZ_file.py'
Nov 26 12:42:45 compute-0 sudo[121188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:45 compute-0 python3.9[121190]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:42:45 compute-0 sudo[121188]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:45 compute-0 sudo[121340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epbhxpljohzxrnpvzqsyxvswecgrhvnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160965.5972445-105-193656039448415/AnsiballZ_file.py'
Nov 26 12:42:45 compute-0 sudo[121340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:45 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:45 compute-0 python3.9[121342]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:42:45 compute-0 sudo[121340]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:45 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:42:46 compute-0 sudo[121492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afdoufzpsfbgnazaqdyboredqueetbvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160966.0381527-113-134636353927030/AnsiballZ_stat.py'
Nov 26 12:42:46 compute-0 sudo[121492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:46 compute-0 ceph-mon[74966]: 10.1e scrub starts
Nov 26 12:42:46 compute-0 ceph-mon[74966]: 10.1e scrub ok
Nov 26 12:42:46 compute-0 python3.9[121494]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:42:46 compute-0 sudo[121492]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:46 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.a scrub starts
Nov 26 12:42:46 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.a scrub ok
Nov 26 12:42:46 compute-0 sudo[121570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhhsiwyhafyyinuaoubhrdmawgqftzwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160966.0381527-113-134636353927030/AnsiballZ_file.py'
Nov 26 12:42:46 compute-0 sudo[121570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:46 compute-0 python3.9[121572]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:42:46 compute-0 sudo[121570]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:46 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Nov 26 12:42:46 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Nov 26 12:42:46 compute-0 sudo[121722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzsplkouaoqbdsamhahnedgyfuftzgrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160966.793085-125-6581894304887/AnsiballZ_stat.py'
Nov 26 12:42:46 compute-0 sudo[121722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:47 compute-0 python3.9[121724]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:42:47 compute-0 sudo[121722]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:47 compute-0 ceph-mon[74966]: pgmap v261: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:47 compute-0 ceph-mon[74966]: 11.a scrub starts
Nov 26 12:42:47 compute-0 ceph-mon[74966]: 11.a scrub ok
Nov 26 12:42:47 compute-0 sudo[121800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwykhlhmdxpqjhfhzugkzwpkbdglgiyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160966.793085-125-6581894304887/AnsiballZ_file.py'
Nov 26 12:42:47 compute-0 sudo[121800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:47 compute-0 python3.9[121802]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:42:47 compute-0 sudo[121800]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:47 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.c scrub starts
Nov 26 12:42:47 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.c scrub ok
Nov 26 12:42:47 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Nov 26 12:42:47 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Nov 26 12:42:47 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:47 compute-0 sudo[121952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmdmblmyersjwujodlxqwtzwtxriualn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160967.548673-137-176839745753759/AnsiballZ_systemd.py'
Nov 26 12:42:47 compute-0 sudo[121952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:48 compute-0 python3.9[121954]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:42:48 compute-0 systemd[1]: Reloading.
Nov 26 12:42:48 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.e scrub starts
Nov 26 12:42:48 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.e scrub ok
Nov 26 12:42:48 compute-0 ceph-mon[74966]: 10.15 scrub starts
Nov 26 12:42:48 compute-0 ceph-mon[74966]: 10.15 scrub ok
Nov 26 12:42:48 compute-0 ceph-mon[74966]: 11.c scrub starts
Nov 26 12:42:48 compute-0 ceph-mon[74966]: 11.c scrub ok
Nov 26 12:42:48 compute-0 systemd-sysv-generator[121976]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:42:48 compute-0 systemd-rc-local-generator[121973]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:42:48 compute-0 sudo[121952]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:48 compute-0 sudo[122141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaecklmxxkkkuqnffwvjuowwjkugxdpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160968.540554-145-119460609136712/AnsiballZ_stat.py'
Nov 26 12:42:48 compute-0 sudo[122141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:48 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Nov 26 12:42:48 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Nov 26 12:42:49 compute-0 python3.9[122143]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:42:49 compute-0 sudo[122141]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:49 compute-0 sudo[122219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sajogmcmhmbvbjgpmshemuwivoymfxoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160968.540554-145-119460609136712/AnsiballZ_file.py'
Nov 26 12:42:49 compute-0 sudo[122219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:49 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Nov 26 12:42:49 compute-0 ceph-mon[74966]: 10.9 scrub starts
Nov 26 12:42:49 compute-0 ceph-mon[74966]: 10.9 scrub ok
Nov 26 12:42:49 compute-0 ceph-mon[74966]: pgmap v262: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:49 compute-0 ceph-mon[74966]: 9.e scrub starts
Nov 26 12:42:49 compute-0 ceph-mon[74966]: 9.e scrub ok
Nov 26 12:42:49 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Nov 26 12:42:49 compute-0 python3.9[122221]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:42:49 compute-0 sudo[122219]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:49 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.13 deep-scrub starts
Nov 26 12:42:49 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.13 deep-scrub ok
Nov 26 12:42:49 compute-0 sudo[122371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjjsinvccjzuseollbiugizycnnmygfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160969.4783316-157-106039006764736/AnsiballZ_stat.py'
Nov 26 12:42:49 compute-0 sudo[122371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:49 compute-0 python3.9[122373]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:42:49 compute-0 sudo[122371]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:49 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:49 compute-0 sudo[122449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpihsjwjndpnsqsahbwzapdtmkuoqgyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160969.4783316-157-106039006764736/AnsiballZ_file.py'
Nov 26 12:42:49 compute-0 sudo[122449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:50 compute-0 python3.9[122451]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:42:50 compute-0 sudo[122449]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:50 compute-0 ceph-mon[74966]: 10.16 scrub starts
Nov 26 12:42:50 compute-0 ceph-mon[74966]: 10.16 scrub ok
Nov 26 12:42:50 compute-0 ceph-mon[74966]: 9.17 scrub starts
Nov 26 12:42:50 compute-0 ceph-mon[74966]: 9.17 scrub ok
Nov 26 12:42:50 compute-0 ceph-mon[74966]: 11.13 deep-scrub starts
Nov 26 12:42:50 compute-0 ceph-mon[74966]: 11.13 deep-scrub ok
Nov 26 12:42:50 compute-0 sudo[122601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygkzzlbmdlcuvzmgqxkfdoqrevdvtige ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160970.2566974-169-252481316878116/AnsiballZ_systemd.py'
Nov 26 12:42:50 compute-0 sudo[122601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:50 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Nov 26 12:42:50 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Nov 26 12:42:50 compute-0 python3.9[122603]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:42:50 compute-0 systemd[1]: Reloading.
Nov 26 12:42:50 compute-0 systemd-rc-local-generator[122625]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:42:50 compute-0 systemd-sysv-generator[122628]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:42:50 compute-0 systemd[1]: Starting Create netns directory...
Nov 26 12:42:50 compute-0 systemd[76457]: Created slice User Background Tasks Slice.
Nov 26 12:42:50 compute-0 systemd[76457]: Starting Cleanup of User's Temporary Files and Directories...
Nov 26 12:42:50 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 26 12:42:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:42:50 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 26 12:42:50 compute-0 systemd[1]: Finished Create netns directory.
Nov 26 12:42:50 compute-0 systemd[76457]: Finished Cleanup of User's Temporary Files and Directories.
Nov 26 12:42:50 compute-0 sudo[122601]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:51 compute-0 ceph-mon[74966]: pgmap v263: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:51 compute-0 ceph-mon[74966]: 11.16 scrub starts
Nov 26 12:42:51 compute-0 ceph-mon[74966]: 11.16 scrub ok
Nov 26 12:42:51 compute-0 python3.9[122797]: ansible-ansible.builtin.service_facts Invoked
Nov 26 12:42:51 compute-0 network[122814]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 12:42:51 compute-0 network[122815]: 'network-scripts' will be removed from distribution in near future.
Nov 26 12:42:51 compute-0 network[122816]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 12:42:51 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:52 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.f scrub starts
Nov 26 12:42:52 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.f scrub ok
Nov 26 12:42:53 compute-0 ceph-mon[74966]: pgmap v264: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:53 compute-0 ceph-mon[74966]: 9.f scrub starts
Nov 26 12:42:53 compute-0 ceph-mon[74966]: 9.f scrub ok
Nov 26 12:42:53 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Nov 26 12:42:53 compute-0 sudo[123076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phcctwikbbtmqxitkissjnjlhualzzpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160973.634097-195-35708547187271/AnsiballZ_stat.py'
Nov 26 12:42:53 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Nov 26 12:42:53 compute-0 sudo[123076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:53 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:53 compute-0 python3.9[123078]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:42:54 compute-0 sudo[123076]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:54 compute-0 sudo[123154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjhiqkdbjqjgadsjacprpxqyuybhwocv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160973.634097-195-35708547187271/AnsiballZ_file.py'
Nov 26 12:42:54 compute-0 sudo[123154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:54 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Nov 26 12:42:54 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Nov 26 12:42:54 compute-0 python3.9[123156]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:42:54 compute-0 sudo[123154]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:54 compute-0 sudo[123306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwsezgliozwqkyotdqckzxagkahayujj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160974.4728215-208-226365124740938/AnsiballZ_file.py'
Nov 26 12:42:54 compute-0 sudo[123306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:54 compute-0 python3.9[123308]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:42:54 compute-0 sudo[123306]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:55 compute-0 sudo[123458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hatzmdnmjijysohhqdvmajzrgtwngaub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160974.9145827-216-103914895767988/AnsiballZ_stat.py'
Nov 26 12:42:55 compute-0 sudo[123458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:55 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 26 12:42:55 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 26 12:42:55 compute-0 python3.9[123460]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:42:55 compute-0 ceph-mon[74966]: 10.17 scrub starts
Nov 26 12:42:55 compute-0 ceph-mon[74966]: 10.17 scrub ok
Nov 26 12:42:55 compute-0 ceph-mon[74966]: pgmap v265: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:55 compute-0 ceph-mon[74966]: 9.7 scrub starts
Nov 26 12:42:55 compute-0 ceph-mon[74966]: 9.7 scrub ok
Nov 26 12:42:55 compute-0 sudo[123458]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:55 compute-0 sudo[123536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzxjopzlzijrduyctkkbzhllvshuwvdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160974.9145827-216-103914895767988/AnsiballZ_file.py'
Nov 26 12:42:55 compute-0 sudo[123536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:55 compute-0 python3.9[123538]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:42:55 compute-0 sudo[123536]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:55 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:55 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:42:56 compute-0 sudo[123688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asodrekwoemznuqxilpxnxkuihnlvqex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160975.7786996-231-244422676780074/AnsiballZ_timezone.py'
Nov 26 12:42:56 compute-0 sudo[123688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:56 compute-0 ceph-mon[74966]: 6.8 scrub starts
Nov 26 12:42:56 compute-0 ceph-mon[74966]: 6.8 scrub ok
Nov 26 12:42:56 compute-0 python3.9[123690]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 26 12:42:56 compute-0 systemd[1]: Starting Time & Date Service...
Nov 26 12:42:56 compute-0 systemd[1]: Started Time & Date Service.
Nov 26 12:42:56 compute-0 sudo[123688]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:56 compute-0 sudo[123844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inieujiprxzsavrlqculdrsfkwyruxbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160976.5685022-240-104049248549350/AnsiballZ_file.py'
Nov 26 12:42:56 compute-0 sudo[123844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:56 compute-0 python3.9[123846]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:42:56 compute-0 sudo[123844]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:57 compute-0 ceph-mon[74966]: pgmap v266: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:57 compute-0 sudo[123996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frisctiflreynyzbmnlyjljwzybtevyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160977.0711124-248-52373588672550/AnsiballZ_stat.py'
Nov 26 12:42:57 compute-0 sudo[123996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:57 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Nov 26 12:42:57 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Nov 26 12:42:57 compute-0 python3.9[123998]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:42:57 compute-0 sudo[123996]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:57 compute-0 sudo[124074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoxaubbknjqfqusewvakhcrzeuafvmva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160977.0711124-248-52373588672550/AnsiballZ_file.py'
Nov 26 12:42:57 compute-0 sudo[124074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:57 compute-0 python3.9[124076]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:42:57 compute-0 sudo[124074]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:57 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:58 compute-0 sudo[124226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaknziqsmxjzfkzpucxmvmxvmrnktzsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160978.0067987-260-73919775240602/AnsiballZ_stat.py'
Nov 26 12:42:58 compute-0 sudo[124226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:58 compute-0 ceph-mon[74966]: 11.1d scrub starts
Nov 26 12:42:58 compute-0 ceph-mon[74966]: 11.1d scrub ok
Nov 26 12:42:58 compute-0 python3.9[124228]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:42:58 compute-0 sudo[124226]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:58 compute-0 sudo[124304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjglwazvpjrpwqdhayjtvgniwqvqpxjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160978.0067987-260-73919775240602/AnsiballZ_file.py'
Nov 26 12:42:58 compute-0 sudo[124304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:58 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Nov 26 12:42:58 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Nov 26 12:42:58 compute-0 python3.9[124306]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.0iu31q6i recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:42:58 compute-0 sudo[124304]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:59 compute-0 sudo[124456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsjplpmzivsjhshnqbzrdrnclbyxdrwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160978.916214-272-239049186072857/AnsiballZ_stat.py'
Nov 26 12:42:59 compute-0 sudo[124456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:59 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Nov 26 12:42:59 compute-0 ceph-mon[74966]: pgmap v267: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:42:59 compute-0 ceph-mon[74966]: 9.11 scrub starts
Nov 26 12:42:59 compute-0 ceph-mon[74966]: 9.11 scrub ok
Nov 26 12:42:59 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Nov 26 12:42:59 compute-0 python3.9[124458]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:42:59 compute-0 sudo[124456]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:59 compute-0 sudo[124534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-basayjektrjceseroskrnksvegwxbrwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160978.916214-272-239049186072857/AnsiballZ_file.py'
Nov 26 12:42:59 compute-0 sudo[124534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:42:59 compute-0 python3.9[124536]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:42:59 compute-0 sudo[124534]: pam_unix(sudo:session): session closed for user root
Nov 26 12:42:59 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.5 deep-scrub starts
Nov 26 12:42:59 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.5 deep-scrub ok
Nov 26 12:42:59 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:00 compute-0 sudo[124686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyorfvwciqqnijkdqslnxxqeogtanbxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160979.8274276-285-50741314362543/AnsiballZ_command.py'
Nov 26 12:43:00 compute-0 sudo[124686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:00 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Nov 26 12:43:00 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Nov 26 12:43:00 compute-0 ceph-mon[74966]: 9.8 scrub starts
Nov 26 12:43:00 compute-0 ceph-mon[74966]: 9.8 scrub ok
Nov 26 12:43:00 compute-0 ceph-mon[74966]: 9.5 deep-scrub starts
Nov 26 12:43:00 compute-0 ceph-mon[74966]: 9.5 deep-scrub ok
Nov 26 12:43:00 compute-0 python3.9[124688]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:43:00 compute-0 sudo[124686]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:00 compute-0 sudo[124839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpvdschdoccjhgdsjlqxmshbfipffszw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764160980.4357133-293-40043424927911/AnsiballZ_edpm_nftables_from_files.py'
Nov 26 12:43:00 compute-0 sudo[124839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:00 compute-0 python3[124841]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 26 12:43:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:43:00 compute-0 sudo[124839]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:01 compute-0 ceph-mon[74966]: pgmap v268: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:01 compute-0 ceph-mon[74966]: 9.18 scrub starts
Nov 26 12:43:01 compute-0 ceph-mon[74966]: 9.18 scrub ok
Nov 26 12:43:01 compute-0 sudo[124991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdvxdoajsmutyoctwakgddukktempmja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160981.0839558-301-214537473756625/AnsiballZ_stat.py'
Nov 26 12:43:01 compute-0 sudo[124991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:01 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 26 12:43:01 compute-0 python3.9[124993]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:43:01 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 26 12:43:01 compute-0 sudo[124991]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:01 compute-0 sudo[125069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyvzskjhmfqjvkksyanylbnsbuoipmqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160981.0839558-301-214537473756625/AnsiballZ_file.py'
Nov 26 12:43:01 compute-0 sudo[125069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:01 compute-0 python3.9[125071]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:43:01 compute-0 sudo[125069]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:01 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:02 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.c scrub starts
Nov 26 12:43:02 compute-0 sudo[125221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfulvosfymjyvznnqnsflmjxafxihofa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160981.960826-313-241693654446986/AnsiballZ_stat.py'
Nov 26 12:43:02 compute-0 sudo[125221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:02 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.c scrub ok
Nov 26 12:43:02 compute-0 ceph-mon[74966]: 6.1 scrub starts
Nov 26 12:43:02 compute-0 ceph-mon[74966]: 6.1 scrub ok
Nov 26 12:43:02 compute-0 python3.9[125223]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:43:02 compute-0 sudo[125221]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:02 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.19 deep-scrub starts
Nov 26 12:43:02 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.19 deep-scrub ok
Nov 26 12:43:02 compute-0 sudo[125299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moscpieznxrwedmpvgdzlqiqhnomcaib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160981.960826-313-241693654446986/AnsiballZ_file.py'
Nov 26 12:43:02 compute-0 sudo[125299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:02 compute-0 python3.9[125301]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:43:02 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.b scrub starts
Nov 26 12:43:02 compute-0 sudo[125299]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:02 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.b scrub ok
Nov 26 12:43:03 compute-0 sudo[125451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmidyzbpsyzvuabpxxliedazwuafmaxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160982.848465-325-48471047347389/AnsiballZ_stat.py'
Nov 26 12:43:03 compute-0 sudo[125451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:03 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 26 12:43:03 compute-0 python3.9[125453]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:43:03 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 26 12:43:03 compute-0 sudo[125451]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:03 compute-0 ceph-mon[74966]: pgmap v269: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:03 compute-0 ceph-mon[74966]: 9.c scrub starts
Nov 26 12:43:03 compute-0 ceph-mon[74966]: 9.c scrub ok
Nov 26 12:43:03 compute-0 ceph-mon[74966]: 10.19 deep-scrub starts
Nov 26 12:43:03 compute-0 ceph-mon[74966]: 10.19 deep-scrub ok
Nov 26 12:43:03 compute-0 ceph-mon[74966]: 9.b scrub starts
Nov 26 12:43:03 compute-0 ceph-mon[74966]: 9.b scrub ok
Nov 26 12:43:03 compute-0 sudo[125529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtnqwawkthqgptujvuhvzbjndxvycrzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160982.848465-325-48471047347389/AnsiballZ_file.py'
Nov 26 12:43:03 compute-0 sudo[125529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:03 compute-0 python3.9[125531]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:43:03 compute-0 sudo[125529]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:03 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Nov 26 12:43:03 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Nov 26 12:43:03 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:03 compute-0 sudo[125681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjjavlweihjceausrveqvolopxmlbtkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160983.7188377-337-219578316407040/AnsiballZ_stat.py'
Nov 26 12:43:03 compute-0 sudo[125681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:04 compute-0 python3.9[125683]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:43:04 compute-0 sudo[125681]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:04 compute-0 sudo[125759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvpkpnjxoysblplzjoovvhsluhtkurcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160983.7188377-337-219578316407040/AnsiballZ_file.py'
Nov 26 12:43:04 compute-0 sudo[125759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:04 compute-0 ceph-mon[74966]: 6.f scrub starts
Nov 26 12:43:04 compute-0 ceph-mon[74966]: 6.f scrub ok
Nov 26 12:43:04 compute-0 ceph-mon[74966]: 9.9 scrub starts
Nov 26 12:43:04 compute-0 ceph-mon[74966]: 9.9 scrub ok
Nov 26 12:43:04 compute-0 python3.9[125761]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:43:04 compute-0 sudo[125759]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:04 compute-0 sudo[125911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbyqojzzipyquxuegodzrorwexqegvzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160984.5770802-349-11751874298822/AnsiballZ_stat.py'
Nov 26 12:43:04 compute-0 sudo[125911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:04 compute-0 python3.9[125913]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:43:05 compute-0 sudo[125911]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:05 compute-0 sudo[125989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlbnavrohsiekalyddblcbfmqwtwptgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160984.5770802-349-11751874298822/AnsiballZ_file.py'
Nov 26 12:43:05 compute-0 sudo[125989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:05 compute-0 ceph-mon[74966]: pgmap v270: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:05 compute-0 python3.9[125991]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:43:05 compute-0 sudo[125989]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:05 compute-0 sudo[126141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqcsbsizgbzawtluztemgcjrshsenjin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160985.531644-362-108331915427050/AnsiballZ_command.py'
Nov 26 12:43:05 compute-0 sudo[126141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:43:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:43:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:43:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:43:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:43:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:43:05 compute-0 python3.9[126143]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:43:05 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:05 compute-0 sudo[126141]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:43:06 compute-0 sudo[126296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcmxywfzogogmilgbntddggfvmgmgmmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160986.0658557-370-120142629043263/AnsiballZ_blockinfile.py'
Nov 26 12:43:06 compute-0 sudo[126296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:06 compute-0 python3.9[126298]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:43:06 compute-0 sudo[126296]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:06 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Nov 26 12:43:06 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Nov 26 12:43:06 compute-0 sudo[126448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izhobcbmxqwoqkkaxvykcqbzgrdiatbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160986.7478905-379-185711821699847/AnsiballZ_file.py'
Nov 26 12:43:06 compute-0 sudo[126448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:07 compute-0 python3.9[126450]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:43:07 compute-0 sudo[126448]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:07 compute-0 ceph-mon[74966]: pgmap v271: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:07 compute-0 ceph-mon[74966]: 6.3 scrub starts
Nov 26 12:43:07 compute-0 ceph-mon[74966]: 6.3 scrub ok
Nov 26 12:43:07 compute-0 sudo[126600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iomwkybwxotzenceqjkihasdhaxmwfkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160987.2550385-379-182424557380129/AnsiballZ_file.py'
Nov 26 12:43:07 compute-0 sudo[126600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:07 compute-0 python3.9[126602]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:43:07 compute-0 sudo[126600]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:07 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:08 compute-0 sudo[126752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blhrkvzvgfmnvbosiyqqawccyrjojzkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160987.7452264-394-44483484513765/AnsiballZ_mount.py'
Nov 26 12:43:08 compute-0 sudo[126752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:08 compute-0 python3.9[126754]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 26 12:43:08 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Nov 26 12:43:08 compute-0 sudo[126752]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:08 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Nov 26 12:43:08 compute-0 sudo[126904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rogfhnprdfezvjelerxwwpgreetoytjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160988.3857548-394-213652131044776/AnsiballZ_mount.py'
Nov 26 12:43:08 compute-0 sudo[126904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:08 compute-0 python3.9[126906]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 26 12:43:08 compute-0 sudo[126904]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:09 compute-0 sshd-session[119789]: Connection closed by 192.168.122.30 port 34018
Nov 26 12:43:09 compute-0 sshd-session[119786]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:43:09 compute-0 systemd-logind[777]: Session 39 logged out. Waiting for processes to exit.
Nov 26 12:43:09 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Nov 26 12:43:09 compute-0 systemd[1]: session-39.scope: Consumed 21.413s CPU time.
Nov 26 12:43:09 compute-0 systemd-logind[777]: Removed session 39.
Nov 26 12:43:09 compute-0 ceph-mon[74966]: pgmap v272: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:09 compute-0 ceph-mon[74966]: 9.13 scrub starts
Nov 26 12:43:09 compute-0 ceph-mon[74966]: 9.13 scrub ok
Nov 26 12:43:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Nov 26 12:43:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Nov 26 12:43:09 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:10 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Nov 26 12:43:10 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Nov 26 12:43:10 compute-0 ceph-mon[74966]: 10.13 scrub starts
Nov 26 12:43:10 compute-0 ceph-mon[74966]: 10.13 scrub ok
Nov 26 12:43:10 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:43:11 compute-0 ceph-mon[74966]: pgmap v273: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:11 compute-0 ceph-mon[74966]: 9.19 scrub starts
Nov 26 12:43:11 compute-0 ceph-mon[74966]: 9.19 scrub ok
Nov 26 12:43:11 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:12 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.b scrub starts
Nov 26 12:43:12 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.b scrub ok
Nov 26 12:43:13 compute-0 ceph-mon[74966]: pgmap v274: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:13 compute-0 ceph-mon[74966]: 10.b scrub starts
Nov 26 12:43:13 compute-0 ceph-mon[74966]: 10.b scrub ok
Nov 26 12:43:13 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 26 12:43:13 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 26 12:43:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Nov 26 12:43:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Nov 26 12:43:13 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:14 compute-0 sshd-session[126931]: Accepted publickey for zuul from 192.168.122.30 port 57726 ssh2: ECDSA SHA256:oYqKaXpw3UXfGsjV9kVmxjhHDxrL0kntNo/c2mjveus
Nov 26 12:43:14 compute-0 systemd-logind[777]: New session 40 of user zuul.
Nov 26 12:43:14 compute-0 systemd[1]: Started Session 40 of User zuul.
Nov 26 12:43:14 compute-0 sshd-session[126931]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:43:14 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Nov 26 12:43:14 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Nov 26 12:43:14 compute-0 ceph-mon[74966]: 10.12 scrub starts
Nov 26 12:43:14 compute-0 ceph-mon[74966]: 10.12 scrub ok
Nov 26 12:43:14 compute-0 ceph-mon[74966]: 9.1 scrub starts
Nov 26 12:43:14 compute-0 ceph-mon[74966]: 9.1 scrub ok
Nov 26 12:43:14 compute-0 sudo[127084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uluxcrwtdezqphjkxgexmvhdfmxnmrvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160994.1352282-16-186759649652722/AnsiballZ_tempfile.py'
Nov 26 12:43:14 compute-0 sudo[127084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:14 compute-0 python3.9[127086]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 26 12:43:14 compute-0 sudo[127084]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:15 compute-0 sudo[127236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arliownofdnbufkecedvlbdvkkngnxsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160994.7603629-28-249587363623499/AnsiballZ_stat.py'
Nov 26 12:43:15 compute-0 sudo[127236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:15 compute-0 python3.9[127238]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:43:15 compute-0 sudo[127236]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:15 compute-0 ceph-mon[74966]: pgmap v275: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:15 compute-0 ceph-mon[74966]: 10.10 scrub starts
Nov 26 12:43:15 compute-0 ceph-mon[74966]: 10.10 scrub ok
Nov 26 12:43:15 compute-0 sudo[127390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pplcokvhccharunzwszzzerfwvdfbypj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160995.376612-36-247957143795710/AnsiballZ_slurp.py'
Nov 26 12:43:15 compute-0 sudo[127390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:15 compute-0 python3.9[127392]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 26 12:43:15 compute-0 sudo[127390]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:15 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:15 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:43:16 compute-0 sudo[127542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jphuuvpvmrkcfchqkdnnozcnnqhmobrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160995.9718838-44-176946474651612/AnsiballZ_stat.py'
Nov 26 12:43:16 compute-0 sudo[127542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:16 compute-0 python3.9[127544]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.fpfcw7t5 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:43:16 compute-0 ceph-mon[74966]: pgmap v276: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:16 compute-0 sudo[127542]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:16 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.1a deep-scrub starts
Nov 26 12:43:16 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.1a deep-scrub ok
Nov 26 12:43:16 compute-0 sudo[127667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqiybjbedschpkzwijoddfszgxyvbcuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160995.9718838-44-176946474651612/AnsiballZ_copy.py'
Nov 26 12:43:16 compute-0 sudo[127667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:16 compute-0 python3.9[127669]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.fpfcw7t5 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764160995.9718838-44-176946474651612/.source.fpfcw7t5 _original_basename=.6p_s6o99 follow=False checksum=e21e3f9e4941376571ab17089734df5da9d861b7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:43:16 compute-0 sudo[127667]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:17 compute-0 ceph-mon[74966]: 10.1a deep-scrub starts
Nov 26 12:43:17 compute-0 ceph-mon[74966]: 10.1a deep-scrub ok
Nov 26 12:43:17 compute-0 sudo[127819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gssofufmqofwkpaejncqdsugttsmkkhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160997.0058696-59-13765036108718/AnsiballZ_setup.py'
Nov 26 12:43:17 compute-0 sudo[127819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:17 compute-0 python3.9[127821]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:43:17 compute-0 sudo[127819]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:17 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.d scrub starts
Nov 26 12:43:17 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.d scrub ok
Nov 26 12:43:17 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:18 compute-0 sudo[127971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcnskrkmjzxqqllpucpllwkfszorjprf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160997.9031324-68-3138128465691/AnsiballZ_blockinfile.py'
Nov 26 12:43:18 compute-0 sudo[127971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:18 compute-0 ceph-mon[74966]: pgmap v277: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:18 compute-0 python3.9[127973]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZE1dpxvL8OPz/VjvFsUTPfsDH6vQml5mdj02SrlFJXfQ252JoKh5fIbIe5jq+eMTBsdiCv9Uyd8xyCUarLeNlJLXFWeql+5MwT2PuY4qrfay7YgFarsvqVEneCieDB/KjZaqMenEf/yZJjvCZifypNg9Of1e8QgrIOrGdP8zeyVeSR6g7d477abOVM7jqxl1dgu5rM+rlTW4DHASE9s/qzG6qu1p1HB8ZEiKsXEtoLhomhrwcTSk94ELWY62pIn8cyapkDsX3TnUoIzQZE8wHuKD+UpY8fWfvFoKo+fdR3UnZmegzF7lylv9XeU/lSEgeDN/LggErCBVNDLBaUG54mPUhEXh3MLVnzgSeCs+DGrchncrg0mgqgKPeAPoZrH+WzFuvKCCsGBjrX8QhxkOy2Q43UXW4uIZlhuzPSsZEnqjd+oz98yWJanGeEkfPCs4nqf6Btd135JYpY2UQoryGnawaWQx/nbU9rePlzY7IbAuDaivVwT3RTKUEmoXfmis=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKuDB4s6WXjGK+4hbQXMcwUNsMga+M2cTnBcJkimQdRS
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK2PGuuGeSfke7nCSgI56m6cuyn45RHczvKouRcqVMRuIWRuDTGV0zknjmAVTtZjpkmBwAytv1rMLkBGlVHtizM=
                                              create=True mode=0644 path=/tmp/ansible.fpfcw7t5 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:43:18 compute-0 sudo[127971]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:18 compute-0 sudo[128123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzyhywgonazqzmdjwolmetbeicgtalng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160998.5133998-76-7331047209957/AnsiballZ_command.py'
Nov 26 12:43:18 compute-0 sudo[128123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:18 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Nov 26 12:43:18 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Nov 26 12:43:18 compute-0 python3.9[128125]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.fpfcw7t5' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:43:19 compute-0 sudo[128123]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:19 compute-0 ceph-mon[74966]: 9.d scrub starts
Nov 26 12:43:19 compute-0 ceph-mon[74966]: 9.d scrub ok
Nov 26 12:43:19 compute-0 sudo[128277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emnqvwxgbvklsjawmfrdgrqtryrjhkxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764160999.1357555-84-260446625875924/AnsiballZ_file.py'
Nov 26 12:43:19 compute-0 sudo[128277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:19 compute-0 python3.9[128279]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.fpfcw7t5 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:43:19 compute-0 sudo[128277]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:19 compute-0 sshd-session[126934]: Connection closed by 192.168.122.30 port 57726
Nov 26 12:43:19 compute-0 sshd-session[126931]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:43:19 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Nov 26 12:43:19 compute-0 systemd[1]: session-40.scope: Consumed 4.041s CPU time.
Nov 26 12:43:19 compute-0 systemd-logind[777]: Session 40 logged out. Waiting for processes to exit.
Nov 26 12:43:19 compute-0 systemd-logind[777]: Removed session 40.
Nov 26 12:43:19 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:20 compute-0 ceph-mon[74966]: 9.3 scrub starts
Nov 26 12:43:20 compute-0 ceph-mon[74966]: 9.3 scrub ok
Nov 26 12:43:20 compute-0 ceph-mon[74966]: pgmap v278: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:43:21 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.f scrub starts
Nov 26 12:43:21 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.f scrub ok
Nov 26 12:43:21 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:22 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1b deep-scrub starts
Nov 26 12:43:22 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1b deep-scrub ok
Nov 26 12:43:22 compute-0 ceph-mon[74966]: 10.f scrub starts
Nov 26 12:43:22 compute-0 ceph-mon[74966]: 10.f scrub ok
Nov 26 12:43:22 compute-0 ceph-mon[74966]: pgmap v279: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:23 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Nov 26 12:43:23 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Nov 26 12:43:23 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Nov 26 12:43:23 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Nov 26 12:43:23 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:23 compute-0 ceph-mon[74966]: 9.1b deep-scrub starts
Nov 26 12:43:23 compute-0 ceph-mon[74966]: 9.1b deep-scrub ok
Nov 26 12:43:25 compute-0 ceph-mon[74966]: 10.11 scrub starts
Nov 26 12:43:25 compute-0 ceph-mon[74966]: 10.11 scrub ok
Nov 26 12:43:25 compute-0 ceph-mon[74966]: 6.7 scrub starts
Nov 26 12:43:25 compute-0 ceph-mon[74966]: 6.7 scrub ok
Nov 26 12:43:25 compute-0 ceph-mon[74966]: pgmap v280: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:25 compute-0 sshd-session[128305]: Accepted publickey for zuul from 192.168.122.30 port 60622 ssh2: ECDSA SHA256:oYqKaXpw3UXfGsjV9kVmxjhHDxrL0kntNo/c2mjveus
Nov 26 12:43:25 compute-0 systemd-logind[777]: New session 41 of user zuul.
Nov 26 12:43:25 compute-0 systemd[1]: Started Session 41 of User zuul.
Nov 26 12:43:25 compute-0 sshd-session[128305]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:43:25 compute-0 python3.9[128458]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:43:25 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:25 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:43:26 compute-0 sudo[128531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:43:26 compute-0 sudo[128531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:43:26 compute-0 sudo[128531]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:26 compute-0 sudo[128564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:43:26 compute-0 sudo[128564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:43:26 compute-0 sudo[128564]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:26 compute-0 sudo[128589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:43:26 compute-0 sudo[128589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:43:26 compute-0 sudo[128589]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:26 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 26 12:43:26 compute-0 sudo[128614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 12:43:26 compute-0 sudo[128614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:43:26 compute-0 sudo[128726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onybooyvobgxpxwmhupwubkuabtlabzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161006.189827-32-87489901119117/AnsiballZ_systemd.py'
Nov 26 12:43:26 compute-0 sudo[128726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:26 compute-0 sudo[128614]: pam_unix(sudo:session): session closed for user root
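The ceph-admin sudo sequence above (a /bin/true probe, `which python3`, then the copied cephadm binary with `--timeout 895 gather-facts`) is how the cephadm orchestrator collects host facts from this node. A minimal local re-run of that step, assuming the cephadm file under /var/lib/ceph/<fsid>/ is runnable with the host python3 and emits its facts as JSON on stdout (inferred from the log, not confirmed by it):

    #!/usr/bin/env python3
    # Hypothetical local re-run of the gather-facts call logged above.
    import json
    import subprocess

    FSID = "f7d7fe93-41e5-51c4-b72d-63b38686102e"
    CEPHADM = (f"/var/lib/ceph/{FSID}/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    out = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "--timeout", "895", "gather-facts"],
        check=True, capture_output=True, text=True,
    ).stdout

    facts = json.loads(out)   # assumed JSON payload
    print(sorted(facts))      # list the top-level fact keys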
Nov 26 12:43:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:43:26 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:43:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 12:43:26 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:43:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 12:43:26 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:43:26 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 6d67bc07-ad0b-42a8-974c-6a91946468d7 does not exist
Nov 26 12:43:26 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev f5b11044-1555-48f8-9e0b-2a5ee90e629a does not exist
Nov 26 12:43:26 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 485e85eb-d0e6-40ad-9c32-e7893136cc13 does not exist
Nov 26 12:43:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 12:43:26 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:43:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 12:43:26 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:43:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:43:26 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:43:26 compute-0 sudo[128745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:43:26 compute-0 sudo[128745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:43:26 compute-0 sudo[128745]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:26 compute-0 sudo[128770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:43:26 compute-0 sudo[128770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:43:26 compute-0 sudo[128770]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:26 compute-0 python3.9[128730]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 26 12:43:27 compute-0 sudo[128795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:43:27 compute-0 ceph-mon[74966]: pgmap v281: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:27 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:43:27 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:43:27 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:43:27 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:43:27 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:43:27 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:43:27 compute-0 sudo[128795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:43:27 compute-0 sudo[128795]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:27 compute-0 sudo[128726]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:27 compute-0 sudo[128822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 12:43:27 compute-0 sudo[128822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:43:27 compute-0 podman[128994]: 2025-11-26 12:43:27.380217197 +0000 UTC m=+0.037285450 container create 392a8ada285259176f445484e8b9895b7aedd91b6ce4b3b02f63027b5df7e480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_moore, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:43:27 compute-0 systemd[1]: Started libpod-conmon-392a8ada285259176f445484e8b9895b7aedd91b6ce4b3b02f63027b5df7e480.scope.
Nov 26 12:43:27 compute-0 sudo[129040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lapykvkrgfchslmrkcobqegvtjnsrfkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161007.1650023-40-203805912659137/AnsiballZ_systemd.py'
Nov 26 12:43:27 compute-0 sudo[129040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:27 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:43:27 compute-0 podman[128994]: 2025-11-26 12:43:27.457653805 +0000 UTC m=+0.114722079 container init 392a8ada285259176f445484e8b9895b7aedd91b6ce4b3b02f63027b5df7e480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:43:27 compute-0 podman[128994]: 2025-11-26 12:43:27.364490423 +0000 UTC m=+0.021558697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:43:27 compute-0 podman[128994]: 2025-11-26 12:43:27.463684031 +0000 UTC m=+0.120752284 container start 392a8ada285259176f445484e8b9895b7aedd91b6ce4b3b02f63027b5df7e480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:43:27 compute-0 podman[128994]: 2025-11-26 12:43:27.464956033 +0000 UTC m=+0.122024286 container attach 392a8ada285259176f445484e8b9895b7aedd91b6ce4b3b02f63027b5df7e480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_moore, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:43:27 compute-0 vigilant_moore[129042]: 167 167
Nov 26 12:43:27 compute-0 systemd[1]: libpod-392a8ada285259176f445484e8b9895b7aedd91b6ce4b3b02f63027b5df7e480.scope: Deactivated successfully.
Nov 26 12:43:27 compute-0 podman[128994]: 2025-11-26 12:43:27.471175636 +0000 UTC m=+0.128243890 container died 392a8ada285259176f445484e8b9895b7aedd91b6ce4b3b02f63027b5df7e480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 12:43:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa2cc98a9ea91c0952ec76633d30c3fc2e0b7fd3b1a492ab9a985cd58b7c650e-merged.mount: Deactivated successfully.
Nov 26 12:43:27 compute-0 podman[128994]: 2025-11-26 12:43:27.493868292 +0000 UTC m=+0.150936546 container remove 392a8ada285259176f445484e8b9895b7aedd91b6ce4b3b02f63027b5df7e480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:43:27 compute-0 systemd[1]: libpod-conmon-392a8ada285259176f445484e8b9895b7aedd91b6ce4b3b02f63027b5df7e480.scope: Deactivated successfully.
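The `vigilant_moore` container above lives for well under a second: image pull, create, start, a single line of output ("167 167"), died, remove. cephadm launches throwaway containers like this ahead of ceph-volume work, and the two numbers match the uid/gid of the ceph user inside the quay.io/ceph/ceph image; a probe along the following lines would produce the same output (the container's actual entrypoint is not shown in the log, so this is an assumption):

    #!/usr/bin/env python3
    # Hypothetical stand-alone version of the short-lived uid/gid probe container.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # --rm mirrors the create/start/died/remove sequence in the log;
    # stat prints the owner uid and gid of /var/lib/ceph inside the image.
    result = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    )
    print(result.stdout.strip())   # "167 167" per the logged output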
Nov 26 12:43:27 compute-0 podman[129065]: 2025-11-26 12:43:27.630494818 +0000 UTC m=+0.038464537 container create 474a815f9128ca5b6e3568ea5d904e28145ad9b183925b92790d8c627c98418b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wu, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:43:27 compute-0 systemd[1]: Started libpod-conmon-474a815f9128ca5b6e3568ea5d904e28145ad9b183925b92790d8c627c98418b.scope.
Nov 26 12:43:27 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:43:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37cfc1eed2a1e48d4d866c57282888f1a311b3820d75dc85f8c6b8d498c6863b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:43:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37cfc1eed2a1e48d4d866c57282888f1a311b3820d75dc85f8c6b8d498c6863b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:43:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37cfc1eed2a1e48d4d866c57282888f1a311b3820d75dc85f8c6b8d498c6863b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:43:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37cfc1eed2a1e48d4d866c57282888f1a311b3820d75dc85f8c6b8d498c6863b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:43:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37cfc1eed2a1e48d4d866c57282888f1a311b3820d75dc85f8c6b8d498c6863b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:43:27 compute-0 podman[129065]: 2025-11-26 12:43:27.700548267 +0000 UTC m=+0.108517996 container init 474a815f9128ca5b6e3568ea5d904e28145ad9b183925b92790d8c627c98418b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wu, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 12:43:27 compute-0 python3.9[129044]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 12:43:27 compute-0 podman[129065]: 2025-11-26 12:43:27.708892942 +0000 UTC m=+0.116862661 container start 474a815f9128ca5b6e3568ea5d904e28145ad9b183925b92790d8c627c98418b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 12:43:27 compute-0 podman[129065]: 2025-11-26 12:43:27.614504907 +0000 UTC m=+0.022474647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:43:27 compute-0 podman[129065]: 2025-11-26 12:43:27.71050726 +0000 UTC m=+0.118476969 container attach 474a815f9128ca5b6e3568ea5d904e28145ad9b183925b92790d8c627c98418b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wu, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 26 12:43:27 compute-0 sudo[129040]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:27 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:28 compute-0 sudo[129233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctlusxxfveaxrdfdudkaijdfekzcqgff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161007.9149618-49-29017551843261/AnsiballZ_command.py'
Nov 26 12:43:28 compute-0 sudo[129233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:28 compute-0 python3.9[129235]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:43:28 compute-0 sudo[129233]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:28 compute-0 festive_wu[129078]: --> passed data devices: 0 physical, 3 LVM
Nov 26 12:43:28 compute-0 festive_wu[129078]: --> relative data size: 1.0
Nov 26 12:43:28 compute-0 festive_wu[129078]: --> All data devices are unavailable
Nov 26 12:43:28 compute-0 systemd[1]: libpod-474a815f9128ca5b6e3568ea5d904e28145ad9b183925b92790d8c627c98418b.scope: Deactivated successfully.
Nov 26 12:43:28 compute-0 podman[129065]: 2025-11-26 12:43:28.660713655 +0000 UTC m=+1.068683375 container died 474a815f9128ca5b6e3568ea5d904e28145ad9b183925b92790d8c627c98418b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wu, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 12:43:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-37cfc1eed2a1e48d4d866c57282888f1a311b3820d75dc85f8c6b8d498c6863b-merged.mount: Deactivated successfully.
Nov 26 12:43:28 compute-0 podman[129065]: 2025-11-26 12:43:28.701658284 +0000 UTC m=+1.109628003 container remove 474a815f9128ca5b6e3568ea5d904e28145ad9b183925b92790d8c627c98418b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wu, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:43:28 compute-0 systemd[1]: libpod-conmon-474a815f9128ca5b6e3568ea5d904e28145ad9b183925b92790d8c627c98418b.scope: Deactivated successfully.
Nov 26 12:43:28 compute-0 sudo[128822]: pam_unix(sudo:session): session closed for user root
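The sudo command that opened this span (pid 128822) wraps `ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd` inside the ceph image, with CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group in the environment; `festive_wu` answers "--> All data devices are unavailable", meaning all three LVs already carry OSDs and nothing new is created (the lvm list further down shows osd ids 0-2 on them). A sketch of checking that outcome in advance with ceph-volume's report mode, assuming ceph-volume can be run directly on the host (here it only runs inside the container, so this is an illustration):

    #!/usr/bin/env python3
    # Hypothetical dry-run: ask ceph-volume what `lvm batch` would do with these LVs.
    import subprocess

    LVS = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]

    result = subprocess.run(
        ["ceph-volume", "lvm", "batch", "--report", "--no-auto", *LVS],
        capture_output=True, text=True,
    )
    # With all three LVs already prepared, ceph-volume reports nothing to create,
    # matching the "All data devices are unavailable" line from festive_wu above.
    print(result.stdout or result.stderr)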
Nov 26 12:43:28 compute-0 sudo[129347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:43:28 compute-0 sudo[129347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:43:28 compute-0 sudo[129347]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:28 compute-0 sudo[129395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:43:28 compute-0 sudo[129395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:43:28 compute-0 sudo[129395]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:28 compute-0 sudo[129443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:43:28 compute-0 sudo[129443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:43:28 compute-0 sudo[129443]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:28 compute-0 sudo[129495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmkbfsyseybjwmcuvchazkdivwydxztj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161008.5908895-57-221285230017338/AnsiballZ_stat.py'
Nov 26 12:43:28 compute-0 sudo[129495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:28 compute-0 sudo[129496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- lvm list --format json
Nov 26 12:43:28 compute-0 sudo[129496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:43:29 compute-0 ceph-mon[74966]: pgmap v282: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:29 compute-0 python3.9[129504]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:43:29 compute-0 sudo[129495]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:29 compute-0 podman[129578]: 2025-11-26 12:43:29.275896879 +0000 UTC m=+0.041255787 container create b76808b9366f5a2ce9e11456feff06e849226795f0846db59df01a471566ac08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:43:29 compute-0 systemd[1]: Started libpod-conmon-b76808b9366f5a2ce9e11456feff06e849226795f0846db59df01a471566ac08.scope.
Nov 26 12:43:29 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:43:29 compute-0 podman[129578]: 2025-11-26 12:43:29.341858763 +0000 UTC m=+0.107217680 container init b76808b9366f5a2ce9e11456feff06e849226795f0846db59df01a471566ac08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:43:29 compute-0 podman[129578]: 2025-11-26 12:43:29.348833713 +0000 UTC m=+0.114192620 container start b76808b9366f5a2ce9e11456feff06e849226795f0846db59df01a471566ac08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:43:29 compute-0 podman[129578]: 2025-11-26 12:43:29.350313958 +0000 UTC m=+0.115672885 container attach b76808b9366f5a2ce9e11456feff06e849226795f0846db59df01a471566ac08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:43:29 compute-0 kind_goodall[129622]: 167 167
Nov 26 12:43:29 compute-0 systemd[1]: libpod-b76808b9366f5a2ce9e11456feff06e849226795f0846db59df01a471566ac08.scope: Deactivated successfully.
Nov 26 12:43:29 compute-0 podman[129578]: 2025-11-26 12:43:29.355037335 +0000 UTC m=+0.120396372 container died b76808b9366f5a2ce9e11456feff06e849226795f0846db59df01a471566ac08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:43:29 compute-0 podman[129578]: 2025-11-26 12:43:29.261561853 +0000 UTC m=+0.026920781 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:43:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-a48673d1afcfc1f3dc805d8234ac4e4c5e4c7863405f18625e01766aa2192188-merged.mount: Deactivated successfully.
Nov 26 12:43:29 compute-0 podman[129578]: 2025-11-26 12:43:29.375467019 +0000 UTC m=+0.140825927 container remove b76808b9366f5a2ce9e11456feff06e849226795f0846db59df01a471566ac08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:43:29 compute-0 systemd[1]: libpod-conmon-b76808b9366f5a2ce9e11456feff06e849226795f0846db59df01a471566ac08.scope: Deactivated successfully.
Nov 26 12:43:29 compute-0 podman[129665]: 2025-11-26 12:43:29.517469246 +0000 UTC m=+0.041759838 container create cbe1b60a6860655472e77914e8eaba09dccd038e3e28036ba7542d4704cf74a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 12:43:29 compute-0 systemd[1]: Started libpod-conmon-cbe1b60a6860655472e77914e8eaba09dccd038e3e28036ba7542d4704cf74a4.scope.
Nov 26 12:43:29 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/632d4d98ba05ebe79adc357b363abffd07accda6e710fb9069312887d1d3cf22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/632d4d98ba05ebe79adc357b363abffd07accda6e710fb9069312887d1d3cf22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/632d4d98ba05ebe79adc357b363abffd07accda6e710fb9069312887d1d3cf22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/632d4d98ba05ebe79adc357b363abffd07accda6e710fb9069312887d1d3cf22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:43:29 compute-0 podman[129665]: 2025-11-26 12:43:29.58391914 +0000 UTC m=+0.108209754 container init cbe1b60a6860655472e77914e8eaba09dccd038e3e28036ba7542d4704cf74a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hertz, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:43:29 compute-0 podman[129665]: 2025-11-26 12:43:29.590879992 +0000 UTC m=+0.115170595 container start cbe1b60a6860655472e77914e8eaba09dccd038e3e28036ba7542d4704cf74a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 12:43:29 compute-0 podman[129665]: 2025-11-26 12:43:29.592551649 +0000 UTC m=+0.116842242 container attach cbe1b60a6860655472e77914e8eaba09dccd038e3e28036ba7542d4704cf74a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hertz, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 12:43:29 compute-0 podman[129665]: 2025-11-26 12:43:29.501887795 +0000 UTC m=+0.026178408 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:43:29 compute-0 sudo[129756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mogykjhyykmroqjbgrhuripqdgoosoae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161009.2843869-66-255736647660531/AnsiballZ_file.py'
Nov 26 12:43:29 compute-0 sudo[129756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:29 compute-0 python3.9[129758]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:43:29 compute-0 sudo[129756]: pam_unix(sudo:session): session closed for user root
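The zuul playbook steps logged above reload the nftables chain file (`nft -f /etc/nftables/edpm-chains.nft`), stat /etc/nftables/edpm-rules.nft.changed, and then remove that marker file. The same flag-file pattern in plain Python, under the assumption that the marker simply records "rules changed since the last apply" (the playbook's intent is inferred, not stated in the log):

    #!/usr/bin/env python3
    # Hypothetical reconstruction of the logged nftables steps: reload the chains,
    # then clear the "rules changed" marker if it exists.
    import os
    import subprocess

    CHAINS = "/etc/nftables/edpm-chains.nft"
    CHANGED_FLAG = "/etc/nftables/edpm-rules.nft.changed"

    subprocess.run(["nft", "-f", CHAINS], check=True)   # same command as pid 129235

    if os.path.exists(CHANGED_FLAG):                    # the stat + file(state=absent) pair
        os.remove(CHANGED_FLAG)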
Nov 26 12:43:29 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:30 compute-0 sshd-session[128308]: Connection closed by 192.168.122.30 port 60622
Nov 26 12:43:30 compute-0 sshd-session[128305]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:43:30 compute-0 systemd-logind[777]: Session 41 logged out. Waiting for processes to exit.
Nov 26 12:43:30 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Nov 26 12:43:30 compute-0 systemd[1]: session-41.scope: Consumed 3.398s CPU time.
Nov 26 12:43:30 compute-0 systemd-logind[777]: Removed session 41.
Nov 26 12:43:30 compute-0 elated_hertz[129701]: {
Nov 26 12:43:30 compute-0 elated_hertz[129701]:     "0": [
Nov 26 12:43:30 compute-0 elated_hertz[129701]:         {
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "devices": [
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "/dev/loop3"
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             ],
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "lv_name": "ceph_lv0",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "lv_size": "21470642176",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "name": "ceph_lv0",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "tags": {
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.cluster_name": "ceph",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.crush_device_class": "",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.encrypted": "0",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.osd_id": "0",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.type": "block",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.vdo": "0"
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             },
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "type": "block",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "vg_name": "ceph_vg0"
Nov 26 12:43:30 compute-0 elated_hertz[129701]:         }
Nov 26 12:43:30 compute-0 elated_hertz[129701]:     ],
Nov 26 12:43:30 compute-0 elated_hertz[129701]:     "1": [
Nov 26 12:43:30 compute-0 elated_hertz[129701]:         {
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "devices": [
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "/dev/loop4"
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             ],
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "lv_name": "ceph_lv1",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "lv_size": "21470642176",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "name": "ceph_lv1",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "tags": {
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.cluster_name": "ceph",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.crush_device_class": "",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.encrypted": "0",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.osd_id": "1",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.type": "block",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.vdo": "0"
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             },
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "type": "block",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "vg_name": "ceph_vg1"
Nov 26 12:43:30 compute-0 elated_hertz[129701]:         }
Nov 26 12:43:30 compute-0 elated_hertz[129701]:     ],
Nov 26 12:43:30 compute-0 elated_hertz[129701]:     "2": [
Nov 26 12:43:30 compute-0 elated_hertz[129701]:         {
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "devices": [
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "/dev/loop5"
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             ],
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "lv_name": "ceph_lv2",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "lv_size": "21470642176",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "name": "ceph_lv2",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "tags": {
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.cluster_name": "ceph",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.crush_device_class": "",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.encrypted": "0",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.osd_id": "2",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.type": "block",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:                 "ceph.vdo": "0"
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             },
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "type": "block",
Nov 26 12:43:30 compute-0 elated_hertz[129701]:             "vg_name": "ceph_vg2"
Nov 26 12:43:30 compute-0 elated_hertz[129701]:         }
Nov 26 12:43:30 compute-0 elated_hertz[129701]:     ]
Nov 26 12:43:30 compute-0 elated_hertz[129701]: }
Nov 26 12:43:30 compute-0 systemd[1]: libpod-cbe1b60a6860655472e77914e8eaba09dccd038e3e28036ba7542d4704cf74a4.scope: Deactivated successfully.
Nov 26 12:43:30 compute-0 podman[129665]: 2025-11-26 12:43:30.29440672 +0000 UTC m=+0.818697332 container died cbe1b60a6860655472e77914e8eaba09dccd038e3e28036ba7542d4704cf74a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:43:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-632d4d98ba05ebe79adc357b363abffd07accda6e710fb9069312887d1d3cf22-merged.mount: Deactivated successfully.
Nov 26 12:43:30 compute-0 podman[129665]: 2025-11-26 12:43:30.350406503 +0000 UTC m=+0.874697097 container remove cbe1b60a6860655472e77914e8eaba09dccd038e3e28036ba7542d4704cf74a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hertz, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 12:43:30 compute-0 systemd[1]: libpod-conmon-cbe1b60a6860655472e77914e8eaba09dccd038e3e28036ba7542d4704cf74a4.scope: Deactivated successfully.
Nov 26 12:43:30 compute-0 sudo[129496]: pam_unix(sudo:session): session closed for user root
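The `elated_hertz` container above carries the `ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e lvm list --format json` call started at 12:43:28; its JSON maps OSD ids 0, 1, and 2 to ceph_lv0/1/2 on /dev/loop3-5, all tagged with this cluster's fsid. A short parser for output of that shape, assuming the JSON has been captured from stdout (key names taken from the block above):

    #!/usr/bin/env python3
    # Summarize `ceph-volume lvm list --format json` output like the block logged above:
    # one line per OSD with its LV path, backing device, and osd_fsid tag.
    import json
    import sys

    def summarize(lvm_list_json: str) -> None:
        data = json.loads(lvm_list_json)
        for osd_id, entries in sorted(data.items(), key=lambda kv: int(kv[0])):
            for lv in entries:
                tags = lv.get("tags", {})
                devices = ",".join(lv.get("devices", []))
                print(f"osd.{osd_id}: {lv['lv_path']} on {devices} "
                      f"osd_fsid={tags.get('ceph.osd_fsid', '?')}")

    if __name__ == "__main__":
        summarize(sys.stdin.read())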
Nov 26 12:43:30 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Nov 26 12:43:30 compute-0 sudo[129797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:43:30 compute-0 sudo[129797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:43:30 compute-0 sudo[129797]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:30 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Nov 26 12:43:30 compute-0 sudo[129822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:43:30 compute-0 sudo[129822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:43:30 compute-0 sudo[129822]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:30 compute-0 sudo[129847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:43:30 compute-0 sudo[129847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:43:30 compute-0 sudo[129847]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:30 compute-0 sudo[129872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- raw list --format json
Nov 26 12:43:30 compute-0 sudo[129872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:43:30 compute-0 podman[129928]: 2025-11-26 12:43:30.875229156 +0000 UTC m=+0.029483879 container create 7d949b9009a138dca3ae67a29befc53d37f69d06dfe794404b7a9c767d73aa95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 12:43:30 compute-0 systemd[1]: Started libpod-conmon-7d949b9009a138dca3ae67a29befc53d37f69d06dfe794404b7a9c767d73aa95.scope.
Nov 26 12:43:30 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:43:30 compute-0 podman[129928]: 2025-11-26 12:43:30.936482189 +0000 UTC m=+0.090736912 container init 7d949b9009a138dca3ae67a29befc53d37f69d06dfe794404b7a9c767d73aa95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:43:30 compute-0 podman[129928]: 2025-11-26 12:43:30.943118519 +0000 UTC m=+0.097373243 container start 7d949b9009a138dca3ae67a29befc53d37f69d06dfe794404b7a9c767d73aa95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 12:43:30 compute-0 podman[129928]: 2025-11-26 12:43:30.944916063 +0000 UTC m=+0.099170786 container attach 7d949b9009a138dca3ae67a29befc53d37f69d06dfe794404b7a9c767d73aa95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Nov 26 12:43:30 compute-0 boring_fermi[129942]: 167 167
Nov 26 12:43:30 compute-0 systemd[1]: libpod-7d949b9009a138dca3ae67a29befc53d37f69d06dfe794404b7a9c767d73aa95.scope: Deactivated successfully.
Nov 26 12:43:30 compute-0 podman[129928]: 2025-11-26 12:43:30.947749383 +0000 UTC m=+0.102004106 container died 7d949b9009a138dca3ae67a29befc53d37f69d06dfe794404b7a9c767d73aa95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:43:30 compute-0 podman[129928]: 2025-11-26 12:43:30.863044611 +0000 UTC m=+0.017299344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:43:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5e8f63d3522eeb4fe993e71fdcf687824e183d8e280afbfb81c7ea572acd06a-merged.mount: Deactivated successfully.
Nov 26 12:43:30 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:43:30 compute-0 podman[129928]: 2025-11-26 12:43:30.970460534 +0000 UTC m=+0.124715256 container remove 7d949b9009a138dca3ae67a29befc53d37f69d06dfe794404b7a9c767d73aa95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:43:30 compute-0 systemd[1]: libpod-conmon-7d949b9009a138dca3ae67a29befc53d37f69d06dfe794404b7a9c767d73aa95.scope: Deactivated successfully.
Nov 26 12:43:31 compute-0 ceph-mon[74966]: pgmap v283: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:31 compute-0 podman[129964]: 2025-11-26 12:43:31.096052695 +0000 UTC m=+0.032948361 container create babcb8c03402b256f988bb85e9fd6c5f17738a8a354f9e7163c6363a44836a2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lewin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 12:43:31 compute-0 systemd[1]: Started libpod-conmon-babcb8c03402b256f988bb85e9fd6c5f17738a8a354f9e7163c6363a44836a2d.scope.
Nov 26 12:43:31 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:43:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd03fdb7cc3da78a0d56168a54ad340d72d9b00a48003941fb1bd5f8ea322fdb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:43:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd03fdb7cc3da78a0d56168a54ad340d72d9b00a48003941fb1bd5f8ea322fdb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:43:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd03fdb7cc3da78a0d56168a54ad340d72d9b00a48003941fb1bd5f8ea322fdb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:43:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd03fdb7cc3da78a0d56168a54ad340d72d9b00a48003941fb1bd5f8ea322fdb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:43:31 compute-0 podman[129964]: 2025-11-26 12:43:31.165288219 +0000 UTC m=+0.102183885 container init babcb8c03402b256f988bb85e9fd6c5f17738a8a354f9e7163c6363a44836a2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 12:43:31 compute-0 podman[129964]: 2025-11-26 12:43:31.171341048 +0000 UTC m=+0.108236713 container start babcb8c03402b256f988bb85e9fd6c5f17738a8a354f9e7163c6363a44836a2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 12:43:31 compute-0 podman[129964]: 2025-11-26 12:43:31.17520256 +0000 UTC m=+0.112098225 container attach babcb8c03402b256f988bb85e9fd6c5f17738a8a354f9e7163c6363a44836a2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lewin, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:43:31 compute-0 podman[129964]: 2025-11-26 12:43:31.08300488 +0000 UTC m=+0.019900555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:43:31 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 26 12:43:31 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 26 12:43:31 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:31 compute-0 musing_lewin[129977]: {
Nov 26 12:43:31 compute-0 musing_lewin[129977]:     "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 12:43:31 compute-0 musing_lewin[129977]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:43:31 compute-0 musing_lewin[129977]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 12:43:31 compute-0 musing_lewin[129977]:         "osd_id": 1,
Nov 26 12:43:31 compute-0 musing_lewin[129977]:         "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:43:31 compute-0 musing_lewin[129977]:         "type": "bluestore"
Nov 26 12:43:31 compute-0 musing_lewin[129977]:     },
Nov 26 12:43:31 compute-0 musing_lewin[129977]:     "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 12:43:31 compute-0 musing_lewin[129977]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:43:31 compute-0 musing_lewin[129977]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 12:43:31 compute-0 musing_lewin[129977]:         "osd_id": 2,
Nov 26 12:43:31 compute-0 musing_lewin[129977]:         "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:43:31 compute-0 musing_lewin[129977]:         "type": "bluestore"
Nov 26 12:43:31 compute-0 musing_lewin[129977]:     },
Nov 26 12:43:31 compute-0 musing_lewin[129977]:     "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 12:43:31 compute-0 musing_lewin[129977]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:43:31 compute-0 musing_lewin[129977]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 12:43:31 compute-0 musing_lewin[129977]:         "osd_id": 0,
Nov 26 12:43:31 compute-0 musing_lewin[129977]:         "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:43:31 compute-0 musing_lewin[129977]:         "type": "bluestore"
Nov 26 12:43:31 compute-0 musing_lewin[129977]:     }
Nov 26 12:43:31 compute-0 musing_lewin[129977]: }
Nov 26 12:43:32 compute-0 systemd[1]: libpod-babcb8c03402b256f988bb85e9fd6c5f17738a8a354f9e7163c6363a44836a2d.scope: Deactivated successfully.
Nov 26 12:43:32 compute-0 podman[129964]: 2025-11-26 12:43:32.007448182 +0000 UTC m=+0.944343867 container died babcb8c03402b256f988bb85e9fd6c5f17738a8a354f9e7163c6363a44836a2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:43:32 compute-0 ceph-mon[74966]: 10.2 scrub starts
Nov 26 12:43:32 compute-0 ceph-mon[74966]: 10.2 scrub ok
Nov 26 12:43:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd03fdb7cc3da78a0d56168a54ad340d72d9b00a48003941fb1bd5f8ea322fdb-merged.mount: Deactivated successfully.
Nov 26 12:43:32 compute-0 podman[129964]: 2025-11-26 12:43:32.050614203 +0000 UTC m=+0.987509859 container remove babcb8c03402b256f988bb85e9fd6c5f17738a8a354f9e7163c6363a44836a2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lewin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:43:32 compute-0 systemd[1]: libpod-conmon-babcb8c03402b256f988bb85e9fd6c5f17738a8a354f9e7163c6363a44836a2d.scope: Deactivated successfully.
Nov 26 12:43:32 compute-0 sudo[129872]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:32 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:43:32 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:43:32 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:43:32 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:43:32 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 88dbb656-5756-4fff-b2f7-51e3636b0e2f does not exist
Nov 26 12:43:32 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 7e66d59c-1045-4528-959c-c6ffa033dcbf does not exist
Nov 26 12:43:32 compute-0 sudo[130021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:43:32 compute-0 sudo[130021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:43:32 compute-0 sudo[130021]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:32 compute-0 sudo[130046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 12:43:32 compute-0 sudo[130046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:43:32 compute-0 sudo[130046]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:33 compute-0 ceph-mon[74966]: 10.6 scrub starts
Nov 26 12:43:33 compute-0 ceph-mon[74966]: 10.6 scrub ok
Nov 26 12:43:33 compute-0 ceph-mon[74966]: pgmap v284: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:33 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:43:33 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:43:33 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1d deep-scrub starts
Nov 26 12:43:33 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1d deep-scrub ok
Nov 26 12:43:33 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:34 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Nov 26 12:43:34 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Nov 26 12:43:35 compute-0 ceph-mon[74966]: 9.1d deep-scrub starts
Nov 26 12:43:35 compute-0 ceph-mon[74966]: 9.1d deep-scrub ok
Nov 26 12:43:35 compute-0 ceph-mon[74966]: pgmap v285: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:35 compute-0 sshd-session[130071]: Accepted publickey for zuul from 192.168.122.30 port 50176 ssh2: ECDSA SHA256:oYqKaXpw3UXfGsjV9kVmxjhHDxrL0kntNo/c2mjveus
Nov 26 12:43:35 compute-0 systemd-logind[777]: New session 42 of user zuul.
Nov 26 12:43:35 compute-0 systemd[1]: Started Session 42 of User zuul.
Nov 26 12:43:35 compute-0 sshd-session[130071]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:43:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:43:35
Nov 26 12:43:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 12:43:35 compute-0 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 12:43:35 compute-0 ceph-mgr[75236]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'vms', '.mgr', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'backups']
Nov 26 12:43:35 compute-0 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 12:43:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:43:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:43:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:43:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:43:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:43:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:43:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 12:43:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:43:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 12:43:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:43:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:43:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:43:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:43:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:43:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:43:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:43:35 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:43:36 compute-0 ceph-mon[74966]: 6.5 scrub starts
Nov 26 12:43:36 compute-0 ceph-mon[74966]: 6.5 scrub ok
Nov 26 12:43:36 compute-0 python3.9[130224]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:43:36 compute-0 sudo[130378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onrfbztzmquwmayqlshhtypavbsnenzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161016.492851-34-215604321424629/AnsiballZ_setup.py'
Nov 26 12:43:36 compute-0 sudo[130378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:36 compute-0 python3.9[130380]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 12:43:37 compute-0 ceph-mon[74966]: pgmap v286: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:37 compute-0 sudo[130378]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:37 compute-0 sudo[130462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okfvrdpynoyluzhzccyisyoxmveyhgdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161016.492851-34-215604321424629/AnsiballZ_dnf.py'
Nov 26 12:43:37 compute-0 sudo[130462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:37 compute-0 python3.9[130464]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 26 12:43:37 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:38 compute-0 sudo[130462]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:39 compute-0 ceph-mon[74966]: pgmap v287: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:39 compute-0 python3.9[130615]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:43:39 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Nov 26 12:43:39 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Nov 26 12:43:39 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Nov 26 12:43:39 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Nov 26 12:43:39 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:40 compute-0 python3.9[130766]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 12:43:40 compute-0 python3.9[130916]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:43:40 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:43:41 compute-0 python3.9[131066]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:43:41 compute-0 ceph-mon[74966]: 10.14 scrub starts
Nov 26 12:43:41 compute-0 ceph-mon[74966]: 10.14 scrub ok
Nov 26 12:43:41 compute-0 ceph-mon[74966]: 6.9 scrub starts
Nov 26 12:43:41 compute-0 ceph-mon[74966]: 6.9 scrub ok
Nov 26 12:43:41 compute-0 ceph-mon[74966]: pgmap v288: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:41 compute-0 sshd-session[130074]: Connection closed by 192.168.122.30 port 50176
Nov 26 12:43:41 compute-0 sshd-session[130071]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:43:41 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Nov 26 12:43:41 compute-0 systemd[1]: session-42.scope: Consumed 4.556s CPU time.
Nov 26 12:43:41 compute-0 systemd-logind[777]: Session 42 logged out. Waiting for processes to exit.
Nov 26 12:43:41 compute-0 systemd-logind[777]: Removed session 42.
Nov 26 12:43:41 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:42 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Nov 26 12:43:42 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Nov 26 12:43:43 compute-0 ceph-mon[74966]: pgmap v289: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:43 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:44 compute-0 ceph-mon[74966]: 6.2 scrub starts
Nov 26 12:43:44 compute-0 ceph-mon[74966]: 6.2 scrub ok
Nov 26 12:43:44 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 26 12:43:44 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 26 12:43:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 12:43:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:43:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 12:43:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:43:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:43:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:43:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:43:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:43:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:43:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:43:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:43:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:43:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 12:43:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:43:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:43:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:43:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 12:43:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:43:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 12:43:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:43:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:43:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:43:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 12:43:45 compute-0 ceph-mon[74966]: pgmap v290: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:45 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Nov 26 12:43:45 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Nov 26 12:43:45 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:45 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:43:46 compute-0 ceph-mon[74966]: 6.a scrub starts
Nov 26 12:43:46 compute-0 ceph-mon[74966]: 6.a scrub ok
Nov 26 12:43:46 compute-0 sshd-session[131091]: Accepted publickey for zuul from 192.168.122.30 port 42442 ssh2: ECDSA SHA256:oYqKaXpw3UXfGsjV9kVmxjhHDxrL0kntNo/c2mjveus
Nov 26 12:43:46 compute-0 systemd-logind[777]: New session 43 of user zuul.
Nov 26 12:43:46 compute-0 systemd[1]: Started Session 43 of User zuul.
Nov 26 12:43:46 compute-0 sshd-session[131091]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:43:47 compute-0 ceph-mon[74966]: 9.16 scrub starts
Nov 26 12:43:47 compute-0 ceph-mon[74966]: 9.16 scrub ok
Nov 26 12:43:47 compute-0 ceph-mon[74966]: pgmap v291: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:47 compute-0 python3.9[131244]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:43:47 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Nov 26 12:43:47 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Nov 26 12:43:47 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:48 compute-0 sudo[131398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tagndvimbiijlfjvwssnlhpqnbpvisqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161027.9720004-50-18304890669932/AnsiballZ_file.py'
Nov 26 12:43:48 compute-0 sudo[131398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:48 compute-0 python3.9[131400]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:43:48 compute-0 sudo[131398]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:48 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 26 12:43:48 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 26 12:43:48 compute-0 sudo[131550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nozkpgdmqejztgafvsyzzuaomzskuzch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161028.5658865-50-99773057825403/AnsiballZ_file.py'
Nov 26 12:43:48 compute-0 sudo[131550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:48 compute-0 python3.9[131552]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:43:48 compute-0 sudo[131550]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:49 compute-0 ceph-mon[74966]: 9.1c scrub starts
Nov 26 12:43:49 compute-0 ceph-mon[74966]: 9.1c scrub ok
Nov 26 12:43:49 compute-0 ceph-mon[74966]: pgmap v292: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:49 compute-0 ceph-mon[74966]: 6.6 scrub starts
Nov 26 12:43:49 compute-0 ceph-mon[74966]: 6.6 scrub ok
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:43:49.137930) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161029137998, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7253, "num_deletes": 251, "total_data_size": 9480536, "memory_usage": 9717632, "flush_reason": "Manual Compaction"}
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161029152543, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7583249, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 134, "largest_seqno": 7384, "table_properties": {"data_size": 7556632, "index_size": 17222, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 77306, "raw_average_key_size": 23, "raw_value_size": 7493312, "raw_average_value_size": 2264, "num_data_blocks": 756, "num_entries": 3309, "num_filter_entries": 3309, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160615, "oldest_key_time": 1764160615, "file_creation_time": 1764161029, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "360f285c-8dc8-4f98-b8a2-efdebada3f64", "db_session_id": "S468WH7D6IL73VDKE1V5", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 14662 microseconds, and 12090 cpu microseconds.
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:43:49.152593) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7583249 bytes OK
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:43:49.152618) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:43:49.152962) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:43:49.152978) EVENT_LOG_v1 {"time_micros": 1764161029152974, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:43:49.153003) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9448810, prev total WAL file size 9448810, number of live WAL files 2.
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:43:49.154565) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7405KB) 13(52KB) 8(1944B)]
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161029154662, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7639159, "oldest_snapshot_seqno": -1}
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3123 keys, 7595185 bytes, temperature: kUnknown
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161029170038, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7595185, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7569038, "index_size": 17269, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7813, "raw_key_size": 75337, "raw_average_key_size": 24, "raw_value_size": 7507317, "raw_average_value_size": 2403, "num_data_blocks": 760, "num_entries": 3123, "num_filter_entries": 3123, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160613, "oldest_key_time": 0, "file_creation_time": 1764161029, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "360f285c-8dc8-4f98-b8a2-efdebada3f64", "db_session_id": "S468WH7D6IL73VDKE1V5", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:43:49.170180) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7595185 bytes
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:43:49.170507) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 495.7 rd, 492.9 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.3, 0.0 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3412, records dropped: 289 output_compression: NoCompression
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:43:49.170522) EVENT_LOG_v1 {"time_micros": 1764161029170514, "job": 4, "event": "compaction_finished", "compaction_time_micros": 15410, "compaction_time_cpu_micros": 12857, "output_level": 6, "num_output_files": 1, "total_output_size": 7595185, "num_input_records": 3412, "num_output_records": 3123, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161029171369, "job": 4, "event": "table_file_deletion", "file_number": 19}
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161029171425, "job": 4, "event": "table_file_deletion", "file_number": 13}
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161029171455, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 26 12:43:49 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:43:49.154495) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:43:49 compute-0 sudo[131703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikfyqgrtajwypmqyvcoyyjheurxgqcoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161029.0472236-65-152336084713964/AnsiballZ_stat.py'
Nov 26 12:43:49 compute-0 sudo[131703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:49 compute-0 python3.9[131705]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:43:49 compute-0 sudo[131703]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:49 compute-0 sudo[131826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnugntnwpdrfqcmxucorptiufjgayemy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161029.0472236-65-152336084713964/AnsiballZ_copy.py'
Nov 26 12:43:49 compute-0 sudo[131826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:49 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:50 compute-0 python3.9[131828]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161029.0472236-65-152336084713964/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=341ca2fc409c9190c99d327bf21634777e517827 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:43:50 compute-0 sudo[131826]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:50 compute-0 sudo[131978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaxvardwwnfbzegbnvuszzgcgdpadmci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161030.1742167-65-214120028910979/AnsiballZ_stat.py'
Nov 26 12:43:50 compute-0 sudo[131978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:50 compute-0 python3.9[131980]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:43:50 compute-0 sudo[131978]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:50 compute-0 sudo[132101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsnjawvjzwdqjxggzkzlfhbfmcmvwnus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161030.1742167-65-214120028910979/AnsiballZ_copy.py'
Nov 26 12:43:50 compute-0 sudo[132101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:50 compute-0 python3.9[132103]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161030.1742167-65-214120028910979/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=69e5ba039761a8ef5a94c218b10e6621452398f8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:43:50 compute-0 sudo[132101]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:43:51 compute-0 ceph-mon[74966]: pgmap v293: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:51 compute-0 sudo[132253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teiswgiyxijgczrfuhecxeqyxbfjatlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161031.0117486-65-29109148638256/AnsiballZ_stat.py'
Nov 26 12:43:51 compute-0 sudo[132253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:51 compute-0 python3.9[132255]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:43:51 compute-0 sudo[132253]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:51 compute-0 sudo[132376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnpoekcbitjfkjqxtfuitwacjcbykgsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161031.0117486-65-29109148638256/AnsiballZ_copy.py'
Nov 26 12:43:51 compute-0 sudo[132376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:51 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 26 12:43:51 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 26 12:43:51 compute-0 python3.9[132378]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161031.0117486-65-29109148638256/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=e107e0dc1f5999b737bd6fda13616c3460e5af4c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:43:51 compute-0 sudo[132376]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:51 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:52 compute-0 sudo[132528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xadfdjfmeshebrgorlgjnkzpnwusauwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161031.9060633-109-72440018194770/AnsiballZ_file.py'
Nov 26 12:43:52 compute-0 sudo[132528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:52 compute-0 ceph-mon[74966]: 6.e scrub starts
Nov 26 12:43:52 compute-0 ceph-mon[74966]: 6.e scrub ok
Nov 26 12:43:52 compute-0 python3.9[132530]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:43:52 compute-0 sudo[132528]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:52 compute-0 sudo[132680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avunqzaehzfakodthrzzufeowrdrbhks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161032.3782933-109-52347383360734/AnsiballZ_file.py'
Nov 26 12:43:52 compute-0 sudo[132680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:52 compute-0 python3.9[132682]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:43:52 compute-0 sudo[132680]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:53 compute-0 sudo[132832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpvvpsrnupzpjgnfcdlmvpjpxezvlpbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161032.8567283-124-154011641634611/AnsiballZ_stat.py'
Nov 26 12:43:53 compute-0 sudo[132832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:53 compute-0 ceph-mon[74966]: pgmap v294: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:53 compute-0 python3.9[132834]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:43:53 compute-0 sudo[132832]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:53 compute-0 sudo[132955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wstmozgplriixsovtbwwtsktvavwtrwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161032.8567283-124-154011641634611/AnsiballZ_copy.py'
Nov 26 12:43:53 compute-0 sudo[132955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:53 compute-0 python3.9[132957]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161032.8567283-124-154011641634611/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=2d7354a1447831a9eecceecd082a82ae74e08486 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:43:53 compute-0 sudo[132955]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:53 compute-0 sudo[133107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkmvnjlnuxljvlomkdtqaxamyhwkzxvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161033.6856782-124-249558399270437/AnsiballZ_stat.py'
Nov 26 12:43:53 compute-0 sudo[133107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:53 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:54 compute-0 python3.9[133109]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:43:54 compute-0 sudo[133107]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:54 compute-0 sudo[133230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbepkxxrzmjbvaklkjytxyeobaogfego ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161033.6856782-124-249558399270437/AnsiballZ_copy.py'
Nov 26 12:43:54 compute-0 sudo[133230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:54 compute-0 python3.9[133232]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161033.6856782-124-249558399270437/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a78596f424093bc5574244d010f70c6e099d950f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:43:54 compute-0 sudo[133230]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:54 compute-0 sudo[133382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szlgisgiiugyacfrrzipwcbgucvzwxep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161034.6498072-124-214167972970372/AnsiballZ_stat.py'
Nov 26 12:43:54 compute-0 sudo[133382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:54 compute-0 python3.9[133384]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:43:54 compute-0 sudo[133382]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:55 compute-0 ceph-mon[74966]: pgmap v295: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:55 compute-0 sudo[133505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzpomiqjknfavgdvfzeaxuuuxbviflgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161034.6498072-124-214167972970372/AnsiballZ_copy.py'
Nov 26 12:43:55 compute-0 sudo[133505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:55 compute-0 python3.9[133507]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161034.6498072-124-214167972970372/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=dec74b950237e7b2512d888c01e1656030773611 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:43:55 compute-0 sudo[133505]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:55 compute-0 sudo[133657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guoqolkcoiwwtfodwxiylrfjjindmvgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161035.520351-168-39318987548137/AnsiballZ_file.py'
Nov 26 12:43:55 compute-0 sudo[133657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:55 compute-0 python3.9[133659]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:43:55 compute-0 sudo[133657]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:55 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Nov 26 12:43:55 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:55 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:43:55 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Nov 26 12:43:56 compute-0 sudo[133809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raebvxjulrcicljmnmrkwkhxzdzoyawr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161035.9586053-168-200214690609026/AnsiballZ_file.py'
Nov 26 12:43:56 compute-0 sudo[133809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:56 compute-0 python3.9[133811]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:43:56 compute-0 sudo[133809]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:56 compute-0 sudo[133961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvzcdyddufgbxtxrwxdnyckxvaxqvzfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161036.4187741-183-261956650564171/AnsiballZ_stat.py'
Nov 26 12:43:56 compute-0 sudo[133961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:56 compute-0 python3.9[133963]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:43:56 compute-0 sudo[133961]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:56 compute-0 sudo[134084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arxiwprhhrdtwplbfrmyljghpahemcuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161036.4187741-183-261956650564171/AnsiballZ_copy.py'
Nov 26 12:43:56 compute-0 sudo[134084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:57 compute-0 python3.9[134086]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161036.4187741-183-261956650564171/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=03cf771e66ad07a5c7a4525fd0c7443e989e3317 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:43:57 compute-0 sudo[134084]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:57 compute-0 ceph-mon[74966]: 9.1e scrub starts
Nov 26 12:43:57 compute-0 ceph-mon[74966]: pgmap v296: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:57 compute-0 ceph-mon[74966]: 9.1e scrub ok
Nov 26 12:43:57 compute-0 sudo[134236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqwqbdgzerdrrtkzmgevovoskbfjqqdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161037.2417974-183-149220296729121/AnsiballZ_stat.py'
Nov 26 12:43:57 compute-0 sudo[134236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:57 compute-0 python3.9[134238]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:43:57 compute-0 sudo[134236]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:57 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.c scrub starts
Nov 26 12:43:57 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.c scrub ok
Nov 26 12:43:57 compute-0 sudo[134359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odwrprrupciqiklpwaxfkjyvuxkrkgml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161037.2417974-183-149220296729121/AnsiballZ_copy.py'
Nov 26 12:43:57 compute-0 sudo[134359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:57 compute-0 python3.9[134361]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161037.2417974-183-149220296729121/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a78596f424093bc5574244d010f70c6e099d950f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:43:57 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:57 compute-0 sudo[134359]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:58 compute-0 ceph-mon[74966]: 6.c scrub starts
Nov 26 12:43:58 compute-0 ceph-mon[74966]: 6.c scrub ok
Nov 26 12:43:58 compute-0 sudo[134511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epzhsfdtrpokommbwfjtdhoebnttrxon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161038.0552351-183-180112147401526/AnsiballZ_stat.py'
Nov 26 12:43:58 compute-0 sudo[134511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:58 compute-0 python3.9[134513]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:43:58 compute-0 sudo[134511]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:58 compute-0 sudo[134634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lttebrpaumokypbuquvtrkwmdmpxpbni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161038.0552351-183-180112147401526/AnsiballZ_copy.py'
Nov 26 12:43:58 compute-0 sudo[134634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:58 compute-0 python3.9[134636]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161038.0552351-183-180112147401526/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=abcccd8f7b37b72b4b6d0d27fa061a87cf2f1ea7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:43:58 compute-0 sudo[134634]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:59 compute-0 ceph-mon[74966]: pgmap v297: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:43:59 compute-0 sudo[134786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcrthjpkbodwggoplyfdrlawykvlatxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161039.3880281-243-105499307534050/AnsiballZ_file.py'
Nov 26 12:43:59 compute-0 sudo[134786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:43:59 compute-0 python3.9[134788]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:43:59 compute-0 sudo[134786]: pam_unix(sudo:session): session closed for user root
Nov 26 12:43:59 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:00 compute-0 sudo[134938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttrlhwyjjbryrlgyhubmwjqucgfshhjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161039.880723-251-177241378506910/AnsiballZ_stat.py'
Nov 26 12:44:00 compute-0 sudo[134938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:00 compute-0 python3.9[134940]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:44:00 compute-0 sudo[134938]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:00 compute-0 sudo[135061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwulhuqtlganrwxfghjnoxignxbkmgap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161039.880723-251-177241378506910/AnsiballZ_copy.py'
Nov 26 12:44:00 compute-0 sudo[135061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:00 compute-0 python3.9[135063]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161039.880723-251-177241378506910/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7c9073e58b305b24b8ebef88eac378fe26a8dfa0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:44:00 compute-0 sudo[135061]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:44:01 compute-0 sudo[135213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqumztxtnblxuoykcgdomvunhdevpysp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161040.8407502-267-7626946490981/AnsiballZ_file.py'
Nov 26 12:44:01 compute-0 sudo[135213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:01 compute-0 ceph-mon[74966]: pgmap v298: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:01 compute-0 python3.9[135215]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:44:01 compute-0 sudo[135213]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:01 compute-0 sudo[135365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmhuygmxvlbbfxlhohgofigzycwocrpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161041.3289576-275-23312911899851/AnsiballZ_stat.py'
Nov 26 12:44:01 compute-0 sudo[135365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:01 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Nov 26 12:44:01 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Nov 26 12:44:01 compute-0 python3.9[135367]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:44:01 compute-0 sudo[135365]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:01 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:01 compute-0 sudo[135488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xilszgnatjwncujvidiwteellxjaoewa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161041.3289576-275-23312911899851/AnsiballZ_copy.py'
Nov 26 12:44:01 compute-0 sudo[135488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:02 compute-0 python3.9[135490]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161041.3289576-275-23312911899851/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7c9073e58b305b24b8ebef88eac378fe26a8dfa0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:44:02 compute-0 sudo[135488]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:02 compute-0 ceph-mon[74966]: 6.4 scrub starts
Nov 26 12:44:02 compute-0 ceph-mon[74966]: 6.4 scrub ok
Nov 26 12:44:02 compute-0 sudo[135640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vichairdqfjkdodkzklubytcmnpipxxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161042.3139277-291-17517835254568/AnsiballZ_file.py'
Nov 26 12:44:02 compute-0 sudo[135640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:02 compute-0 python3.9[135642]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:44:02 compute-0 sudo[135640]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:02 compute-0 sudo[135792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbtvaxloevqvvamnszetlbdbmouhbjxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161042.7945852-299-81867299563036/AnsiballZ_stat.py'
Nov 26 12:44:02 compute-0 sudo[135792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:03 compute-0 python3.9[135794]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:44:03 compute-0 sudo[135792]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:03 compute-0 ceph-mon[74966]: pgmap v299: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:03 compute-0 sudo[135915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxtcitnbenesdqrhzsaaehyrhmwnhcqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161042.7945852-299-81867299563036/AnsiballZ_copy.py'
Nov 26 12:44:03 compute-0 sudo[135915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:03 compute-0 python3.9[135917]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161042.7945852-299-81867299563036/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7c9073e58b305b24b8ebef88eac378fe26a8dfa0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:44:03 compute-0 sudo[135915]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:03 compute-0 sudo[136067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmdvgmbdjtaybojjyygxeatirpodexwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161043.7495527-315-53258518779161/AnsiballZ_file.py'
Nov 26 12:44:03 compute-0 sudo[136067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:03 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:04 compute-0 python3.9[136069]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:44:04 compute-0 sudo[136067]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:04 compute-0 sudo[136219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzxehoheciwvtvptpivbkqjcusxopuhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161044.253786-323-119407292678162/AnsiballZ_stat.py'
Nov 26 12:44:04 compute-0 sudo[136219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:04 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 26 12:44:04 compute-0 python3.9[136221]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:44:04 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.b scrub ok
Nov 26 12:44:04 compute-0 sudo[136219]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:04 compute-0 sudo[136342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saoikzgzztkoineevqgwyxqqzyxrawud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161044.253786-323-119407292678162/AnsiballZ_copy.py'
Nov 26 12:44:04 compute-0 sudo[136342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:05 compute-0 python3.9[136344]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161044.253786-323-119407292678162/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7c9073e58b305b24b8ebef88eac378fe26a8dfa0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:44:05 compute-0 sudo[136342]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:05 compute-0 ceph-mon[74966]: pgmap v300: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:05 compute-0 ceph-mon[74966]: 6.b scrub starts
Nov 26 12:44:05 compute-0 ceph-mon[74966]: 6.b scrub ok
Nov 26 12:44:05 compute-0 sudo[136494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szthpqhzelhykcssjszbivfuuxqeltus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161045.2106512-339-83750841111227/AnsiballZ_file.py'
Nov 26 12:44:05 compute-0 sudo[136494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:05 compute-0 python3.9[136496]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:44:05 compute-0 sudo[136494]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:44:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:44:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:44:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:44:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:44:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:44:05 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:44:05 compute-0 sudo[136646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nshyjekorazpumjelvscmdowuqtxkofb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161045.7734694-347-261900215496543/AnsiballZ_stat.py'
Nov 26 12:44:05 compute-0 sudo[136646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:06 compute-0 python3.9[136648]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:44:06 compute-0 sudo[136646]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:06 compute-0 sudo[136769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaydiipcljgqqdcdegmafcozlyweeewu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161045.7734694-347-261900215496543/AnsiballZ_copy.py'
Nov 26 12:44:06 compute-0 sudo[136769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:06 compute-0 python3.9[136771]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161045.7734694-347-261900215496543/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7c9073e58b305b24b8ebef88eac378fe26a8dfa0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:44:06 compute-0 sudo[136769]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:06 compute-0 sudo[136921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnwcryhjrmekljiabeqsfgbgzgtltpox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161046.762887-363-135467765840883/AnsiballZ_file.py'
Nov 26 12:44:06 compute-0 sudo[136921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:07 compute-0 python3.9[136923]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:44:07 compute-0 sudo[136921]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:07 compute-0 ceph-mon[74966]: pgmap v301: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:07 compute-0 sudo[137073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdemiznyfgkfuzlbjnphmwphuzeavyke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161047.2883492-371-149110817839969/AnsiballZ_stat.py'
Nov 26 12:44:07 compute-0 sudo[137073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:07 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 26 12:44:07 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 26 12:44:07 compute-0 python3.9[137075]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:44:07 compute-0 sudo[137073]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:07 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:07 compute-0 sudo[137196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwcbjnbkccbjxklttukcwuktwpxbzolr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161047.2883492-371-149110817839969/AnsiballZ_copy.py'
Nov 26 12:44:07 compute-0 sudo[137196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:08 compute-0 python3.9[137198]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161047.2883492-371-149110817839969/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7c9073e58b305b24b8ebef88eac378fe26a8dfa0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:44:08 compute-0 sudo[137196]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:08 compute-0 ceph-mon[74966]: 6.d scrub starts
Nov 26 12:44:08 compute-0 ceph-mon[74966]: 6.d scrub ok
Nov 26 12:44:08 compute-0 sshd-session[131094]: Connection closed by 192.168.122.30 port 42442
Nov 26 12:44:08 compute-0 sshd-session[131091]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:44:08 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Nov 26 12:44:08 compute-0 systemd[1]: session-43.scope: Consumed 17.345s CPU time.
Nov 26 12:44:08 compute-0 systemd-logind[777]: Session 43 logged out. Waiting for processes to exit.
Nov 26 12:44:08 compute-0 systemd-logind[777]: Removed session 43.
Nov 26 12:44:09 compute-0 ceph-mon[74966]: pgmap v302: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:09 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:10 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:44:11 compute-0 ceph-mon[74966]: pgmap v303: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:11 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Nov 26 12:44:11 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Nov 26 12:44:11 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:12 compute-0 ceph-mon[74966]: 9.15 scrub starts
Nov 26 12:44:12 compute-0 ceph-mon[74966]: 9.15 scrub ok
Nov 26 12:44:13 compute-0 ceph-mon[74966]: pgmap v304: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:13 compute-0 sshd-session[137223]: Accepted publickey for zuul from 192.168.122.30 port 36758 ssh2: ECDSA SHA256:oYqKaXpw3UXfGsjV9kVmxjhHDxrL0kntNo/c2mjveus
Nov 26 12:44:13 compute-0 systemd-logind[777]: New session 44 of user zuul.
Nov 26 12:44:13 compute-0 systemd[1]: Started Session 44 of User zuul.
Nov 26 12:44:13 compute-0 sshd-session[137223]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:44:13 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:14 compute-0 sudo[137376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oofljdctjncwmaeyxbctdanqctnhvivk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161053.6884172-22-171703425061920/AnsiballZ_file.py'
Nov 26 12:44:14 compute-0 sudo[137376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:14 compute-0 python3.9[137378]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:44:14 compute-0 sudo[137376]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:14 compute-0 sudo[137528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbunlcsopzgjclyjxypeaqxjrithognk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161054.3973942-34-233556076058310/AnsiballZ_stat.py'
Nov 26 12:44:14 compute-0 sudo[137528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:14 compute-0 python3.9[137530]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:44:14 compute-0 sudo[137528]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:15 compute-0 ceph-mon[74966]: pgmap v305: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:15 compute-0 sudo[137651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvkeessfarsblbfnetgjlqkjfadimvlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161054.3973942-34-233556076058310/AnsiballZ_copy.py'
Nov 26 12:44:15 compute-0 sudo[137651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:15 compute-0 python3.9[137653]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764161054.3973942-34-233556076058310/.source.conf _original_basename=ceph.conf follow=False checksum=547d467ffd9717c8e35ff6810ca30a44e880cfdb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:44:15 compute-0 sudo[137651]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:15 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Nov 26 12:44:15 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Nov 26 12:44:15 compute-0 sudo[137803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcomdawkfdqtetwpgpopihluungrhnvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161055.5687437-34-19694786167669/AnsiballZ_stat.py'
Nov 26 12:44:15 compute-0 sudo[137803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:15 compute-0 python3.9[137805]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:44:15 compute-0 sudo[137803]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:15 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:15 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:44:16 compute-0 ceph-mon[74966]: 9.1f scrub starts
Nov 26 12:44:16 compute-0 ceph-mon[74966]: 9.1f scrub ok
Nov 26 12:44:16 compute-0 sudo[137926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwwlqbgfzwfcgdqhzfqdqjtukayvtcbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161055.5687437-34-19694786167669/AnsiballZ_copy.py'
Nov 26 12:44:16 compute-0 sudo[137926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:16 compute-0 python3.9[137928]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764161055.5687437-34-19694786167669/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=c49cad1c73fc246f2066e2f44ed85f4bdde7800e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:44:16 compute-0 sudo[137926]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:16 compute-0 sshd-session[137226]: Connection closed by 192.168.122.30 port 36758
Nov 26 12:44:16 compute-0 sshd-session[137223]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:44:16 compute-0 systemd-logind[777]: Session 44 logged out. Waiting for processes to exit.
Nov 26 12:44:16 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Nov 26 12:44:16 compute-0 systemd[1]: session-44.scope: Consumed 2.149s CPU time.
Nov 26 12:44:16 compute-0 systemd-logind[777]: Removed session 44.
Nov 26 12:44:17 compute-0 ceph-mon[74966]: pgmap v306: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:17 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:19 compute-0 ceph-mon[74966]: pgmap v307: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:19 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:44:21 compute-0 ceph-mon[74966]: pgmap v308: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:21 compute-0 sshd-session[137953]: Accepted publickey for zuul from 192.168.122.30 port 58104 ssh2: ECDSA SHA256:oYqKaXpw3UXfGsjV9kVmxjhHDxrL0kntNo/c2mjveus
Nov 26 12:44:21 compute-0 systemd-logind[777]: New session 45 of user zuul.
Nov 26 12:44:21 compute-0 systemd[1]: Started Session 45 of User zuul.
Nov 26 12:44:21 compute-0 sshd-session[137953]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:44:21 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:22 compute-0 python3.9[138106]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:44:23 compute-0 ceph-mon[74966]: pgmap v309: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:23 compute-0 sudo[138260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgzftqhlgcwtojrsffnqpylaungenbvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161062.9981983-34-203251878979024/AnsiballZ_file.py'
Nov 26 12:44:23 compute-0 sudo[138260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:23 compute-0 python3.9[138262]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:44:23 compute-0 sudo[138260]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:23 compute-0 sudo[138412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzpaqfywyczxnnjqpwmckxaplufkoonj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161063.6157782-34-265125904148626/AnsiballZ_file.py'
Nov 26 12:44:23 compute-0 sudo[138412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:23 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:23 compute-0 python3.9[138414]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:44:24 compute-0 sudo[138412]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:24 compute-0 python3.9[138564]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:44:25 compute-0 sudo[138714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lorluyvxwkarqqogskefxhnkgocxcqwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161064.782756-57-245487065741590/AnsiballZ_seboolean.py'
Nov 26 12:44:25 compute-0 sudo[138714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:25 compute-0 ceph-mon[74966]: pgmap v310: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:25 compute-0 python3.9[138716]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 26 12:44:25 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:25 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:44:26 compute-0 sudo[138714]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:26 compute-0 sudo[138870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxspgyrfsnepzzbejyujjskrhzclfgyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161066.2765212-67-63644033181290/AnsiballZ_setup.py'
Nov 26 12:44:26 compute-0 dbus-broker-launch[767]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 26 12:44:26 compute-0 sudo[138870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:26 compute-0 python3.9[138872]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 12:44:26 compute-0 sudo[138870]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:27 compute-0 sudo[138954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjimwnjtkgxsntjfjwldetjjexzkrvbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161066.2765212-67-63644033181290/AnsiballZ_dnf.py'
Nov 26 12:44:27 compute-0 sudo[138954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:27 compute-0 ceph-mon[74966]: pgmap v311: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:27 compute-0 python3.9[138956]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 12:44:27 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:28 compute-0 sudo[138954]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:28 compute-0 sudo[139107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjvooctdvovyjuccmpzzbixnlrspcbnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161068.5342937-79-186779614854498/AnsiballZ_systemd.py'
Nov 26 12:44:28 compute-0 sudo[139107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:29 compute-0 python3.9[139109]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 12:44:29 compute-0 ceph-mon[74966]: pgmap v312: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:29 compute-0 sudo[139107]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:29 compute-0 sudo[139262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-depxevshzdyadoiqbzwzudzkoghwxeru ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764161069.4035103-87-153391618820357/AnsiballZ_edpm_nftables_snippet.py'
Nov 26 12:44:29 compute-0 sudo[139262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:29 compute-0 python3[139264]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 26 12:44:29 compute-0 sudo[139262]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:29 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:30 compute-0 sudo[139414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eourwyirhojwcqbvkendzktjyclueggu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161070.0537734-96-84479204479284/AnsiballZ_file.py'
Nov 26 12:44:30 compute-0 sudo[139414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:30 compute-0 python3.9[139416]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:44:30 compute-0 sudo[139414]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:30 compute-0 sudo[139566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdxanlncudjczfpkercqwbtesfiqmuwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161070.505388-104-231272378992964/AnsiballZ_stat.py'
Nov 26 12:44:30 compute-0 sudo[139566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:30 compute-0 python3.9[139568]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:44:30 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:44:30 compute-0 sudo[139566]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:31 compute-0 sudo[139644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkywfbgdzzkstazjfymlcgfqvpkjthfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161070.505388-104-231272378992964/AnsiballZ_file.py'
Nov 26 12:44:31 compute-0 sudo[139644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:31 compute-0 ceph-mon[74966]: pgmap v313: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:31 compute-0 python3.9[139646]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:44:31 compute-0 sudo[139644]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:31 compute-0 sudo[139796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fscvztmpxbibdskznxfqfqamwfihnqcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161071.448428-116-61368497380031/AnsiballZ_stat.py'
Nov 26 12:44:31 compute-0 sudo[139796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:31 compute-0 python3.9[139798]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:44:31 compute-0 sudo[139796]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:31 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:31 compute-0 sudo[139874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojretyhpcapezyzkfghzauufyxbfzuft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161071.448428-116-61368497380031/AnsiballZ_file.py'
Nov 26 12:44:32 compute-0 sudo[139874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:32 compute-0 python3.9[139876]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.xdtpfoiu recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:44:32 compute-0 sudo[139874]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:32 compute-0 sudo[139877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:44:32 compute-0 sudo[139877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:44:32 compute-0 sudo[139877]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:32 compute-0 sudo[139926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:44:32 compute-0 sudo[139926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:44:32 compute-0 sudo[139926]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:32 compute-0 sudo[139974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:44:32 compute-0 sudo[139974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:44:32 compute-0 sudo[139974]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:32 compute-0 sudo[140028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 12:44:32 compute-0 sudo[140028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:44:32 compute-0 sudo[140128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emrbzgytzzxhynoensppcgcxmstdsipu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161072.3310053-128-47607329298784/AnsiballZ_stat.py'
Nov 26 12:44:32 compute-0 sudo[140128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:32 compute-0 python3.9[140137]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:44:32 compute-0 sudo[140128]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:32 compute-0 sudo[140028]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:32 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:44:32 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:44:32 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 12:44:32 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:44:32 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 12:44:32 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:44:32 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 1b2616eb-e4b3-4d90-ad3b-78315eac5626 does not exist
Nov 26 12:44:32 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev f83aa3a3-3339-4cac-ae8a-75c051ea3155 does not exist
Nov 26 12:44:32 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 34053152-2fec-4ec2-a7af-1f7e46863e57 does not exist
Nov 26 12:44:32 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 12:44:32 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:44:32 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 12:44:32 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:44:32 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:44:32 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:44:32 compute-0 sudo[140183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:44:32 compute-0 sudo[140183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:44:32 compute-0 sudo[140183]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:32 compute-0 sudo[140280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyfenwhypnbgyhydyjzzibdcsyqomdix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161072.3310053-128-47607329298784/AnsiballZ_file.py'
Nov 26 12:44:32 compute-0 sudo[140280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:32 compute-0 sudo[140236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:44:32 compute-0 sudo[140236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:44:32 compute-0 sudo[140236]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:32 compute-0 sudo[140286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:44:32 compute-0 sudo[140286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:44:32 compute-0 sudo[140286]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:33 compute-0 sudo[140311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 12:44:33 compute-0 sudo[140311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:44:33 compute-0 python3.9[140284]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:44:33 compute-0 sudo[140280]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:33 compute-0 ceph-mon[74966]: pgmap v314: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:33 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:44:33 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:44:33 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:44:33 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:44:33 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:44:33 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:44:33 compute-0 podman[140404]: 2025-11-26 12:44:33.3305205 +0000 UTC m=+0.042124011 container create aceddc5adf862aef38cd2be3442a7f906476eaa1eaebeb9604b6e71f0b7e94df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:44:33 compute-0 systemd[1]: Started libpod-conmon-aceddc5adf862aef38cd2be3442a7f906476eaa1eaebeb9604b6e71f0b7e94df.scope.
Nov 26 12:44:33 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:44:33 compute-0 podman[140404]: 2025-11-26 12:44:33.405502204 +0000 UTC m=+0.117105725 container init aceddc5adf862aef38cd2be3442a7f906476eaa1eaebeb9604b6e71f0b7e94df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:44:33 compute-0 podman[140404]: 2025-11-26 12:44:33.314316846 +0000 UTC m=+0.025920377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:44:33 compute-0 podman[140404]: 2025-11-26 12:44:33.411591707 +0000 UTC m=+0.123195219 container start aceddc5adf862aef38cd2be3442a7f906476eaa1eaebeb9604b6e71f0b7e94df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 12:44:33 compute-0 podman[140404]: 2025-11-26 12:44:33.414298354 +0000 UTC m=+0.125901875 container attach aceddc5adf862aef38cd2be3442a7f906476eaa1eaebeb9604b6e71f0b7e94df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 12:44:33 compute-0 determined_galileo[140456]: 167 167
Nov 26 12:44:33 compute-0 systemd[1]: libpod-aceddc5adf862aef38cd2be3442a7f906476eaa1eaebeb9604b6e71f0b7e94df.scope: Deactivated successfully.
Nov 26 12:44:33 compute-0 podman[140404]: 2025-11-26 12:44:33.418724325 +0000 UTC m=+0.130327836 container died aceddc5adf862aef38cd2be3442a7f906476eaa1eaebeb9604b6e71f0b7e94df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_galileo, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:44:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-48f0a113ce40f0952c9701e717b2f338aa0920b6bba575ec3a60b95e2b9288e8-merged.mount: Deactivated successfully.
Nov 26 12:44:33 compute-0 podman[140404]: 2025-11-26 12:44:33.446167507 +0000 UTC m=+0.157771019 container remove aceddc5adf862aef38cd2be3442a7f906476eaa1eaebeb9604b6e71f0b7e94df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_galileo, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:44:33 compute-0 systemd[1]: libpod-conmon-aceddc5adf862aef38cd2be3442a7f906476eaa1eaebeb9604b6e71f0b7e94df.scope: Deactivated successfully.
Nov 26 12:44:33 compute-0 podman[140502]: 2025-11-26 12:44:33.595026558 +0000 UTC m=+0.040918571 container create 4349a363c823d0eaec6e7b68c7af9aff0f19981501278cc70f8720a74ab62043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_stonebraker, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:44:33 compute-0 systemd[1]: Started libpod-conmon-4349a363c823d0eaec6e7b68c7af9aff0f19981501278cc70f8720a74ab62043.scope.
Nov 26 12:44:33 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:44:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28df757908398eda10e81d5db73cc90997c7dd5e7b93105b61d0ab2237e548b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:44:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28df757908398eda10e81d5db73cc90997c7dd5e7b93105b61d0ab2237e548b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:44:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28df757908398eda10e81d5db73cc90997c7dd5e7b93105b61d0ab2237e548b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:44:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28df757908398eda10e81d5db73cc90997c7dd5e7b93105b61d0ab2237e548b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:44:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28df757908398eda10e81d5db73cc90997c7dd5e7b93105b61d0ab2237e548b7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:44:33 compute-0 sudo[140568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkhxfvesvciemnglrevfrujsgtahicus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161073.291674-141-163233265588553/AnsiballZ_command.py'
Nov 26 12:44:33 compute-0 podman[140502]: 2025-11-26 12:44:33.580213302 +0000 UTC m=+0.026105315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:44:33 compute-0 sudo[140568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:33 compute-0 podman[140502]: 2025-11-26 12:44:33.677745398 +0000 UTC m=+0.123637401 container init 4349a363c823d0eaec6e7b68c7af9aff0f19981501278cc70f8720a74ab62043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_stonebraker, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 12:44:33 compute-0 podman[140502]: 2025-11-26 12:44:33.687705298 +0000 UTC m=+0.133597301 container start 4349a363c823d0eaec6e7b68c7af9aff0f19981501278cc70f8720a74ab62043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 12:44:33 compute-0 podman[140502]: 2025-11-26 12:44:33.689060098 +0000 UTC m=+0.134952111 container attach 4349a363c823d0eaec6e7b68c7af9aff0f19981501278cc70f8720a74ab62043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 12:44:33 compute-0 python3.9[140570]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:44:33 compute-0 sudo[140568]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:33 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:34 compute-0 sudo[140731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynbrnhxjgprcpjytmiudqofdtkkwenbt ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764161074.004673-149-210009653950556/AnsiballZ_edpm_nftables_from_files.py'
Nov 26 12:44:34 compute-0 sudo[140731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:34 compute-0 python3[140733]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 26 12:44:34 compute-0 sudo[140731]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:34 compute-0 youthful_stonebraker[140555]: --> passed data devices: 0 physical, 3 LVM
Nov 26 12:44:34 compute-0 youthful_stonebraker[140555]: --> relative data size: 1.0
Nov 26 12:44:34 compute-0 youthful_stonebraker[140555]: --> All data devices are unavailable
Nov 26 12:44:34 compute-0 systemd[1]: libpod-4349a363c823d0eaec6e7b68c7af9aff0f19981501278cc70f8720a74ab62043.scope: Deactivated successfully.
Nov 26 12:44:34 compute-0 podman[140502]: 2025-11-26 12:44:34.616439008 +0000 UTC m=+1.062331012 container died 4349a363c823d0eaec6e7b68c7af9aff0f19981501278cc70f8720a74ab62043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 12:44:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-28df757908398eda10e81d5db73cc90997c7dd5e7b93105b61d0ab2237e548b7-merged.mount: Deactivated successfully.
Nov 26 12:44:34 compute-0 podman[140502]: 2025-11-26 12:44:34.64951796 +0000 UTC m=+1.095409963 container remove 4349a363c823d0eaec6e7b68c7af9aff0f19981501278cc70f8720a74ab62043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_stonebraker, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 12:44:34 compute-0 systemd[1]: libpod-conmon-4349a363c823d0eaec6e7b68c7af9aff0f19981501278cc70f8720a74ab62043.scope: Deactivated successfully.
Nov 26 12:44:34 compute-0 sudo[140311]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:34 compute-0 sudo[140788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:44:34 compute-0 sudo[140788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:44:34 compute-0 sudo[140788]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:34 compute-0 sudo[140846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:44:34 compute-0 sudo[140846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:44:34 compute-0 sudo[140846]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:34 compute-0 sudo[140887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:44:34 compute-0 sudo[140887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:44:34 compute-0 sudo[140887]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:34 compute-0 sudo[140935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- lvm list --format json
Nov 26 12:44:34 compute-0 sudo[140935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:44:34 compute-0 sudo[141010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iamsheeluwfyawkbbrocuxwuabbveura ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161074.709371-157-115719914962467/AnsiballZ_stat.py'
Nov 26 12:44:34 compute-0 sudo[141010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:35 compute-0 python3.9[141012]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:44:35 compute-0 sudo[141010]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:35 compute-0 podman[141051]: 2025-11-26 12:44:35.221803649 +0000 UTC m=+0.039644421 container create 081e2c4cc99bd39d0191a245fb5231d20733361092459a5236a345fb3d11fffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_blackburn, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:44:35 compute-0 ceph-mon[74966]: pgmap v315: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:35 compute-0 systemd[1]: Started libpod-conmon-081e2c4cc99bd39d0191a245fb5231d20733361092459a5236a345fb3d11fffb.scope.
Nov 26 12:44:35 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:44:35 compute-0 podman[141051]: 2025-11-26 12:44:35.287043585 +0000 UTC m=+0.104884377 container init 081e2c4cc99bd39d0191a245fb5231d20733361092459a5236a345fb3d11fffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_blackburn, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:44:35 compute-0 podman[141051]: 2025-11-26 12:44:35.294570003 +0000 UTC m=+0.112410776 container start 081e2c4cc99bd39d0191a245fb5231d20733361092459a5236a345fb3d11fffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_blackburn, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 12:44:35 compute-0 podman[141051]: 2025-11-26 12:44:35.296387685 +0000 UTC m=+0.114228458 container attach 081e2c4cc99bd39d0191a245fb5231d20733361092459a5236a345fb3d11fffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:44:35 compute-0 blissful_blackburn[141108]: 167 167
Nov 26 12:44:35 compute-0 systemd[1]: libpod-081e2c4cc99bd39d0191a245fb5231d20733361092459a5236a345fb3d11fffb.scope: Deactivated successfully.
Nov 26 12:44:35 compute-0 podman[141051]: 2025-11-26 12:44:35.299148081 +0000 UTC m=+0.116988854 container died 081e2c4cc99bd39d0191a245fb5231d20733361092459a5236a345fb3d11fffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_blackburn, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:44:35 compute-0 podman[141051]: 2025-11-26 12:44:35.206074229 +0000 UTC m=+0.023915021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:44:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-49b32a58a952e28057966d9517e32e8fc1f86e85acdb6a464e60dfa3a01f46b8-merged.mount: Deactivated successfully.
Nov 26 12:44:35 compute-0 podman[141051]: 2025-11-26 12:44:35.319859662 +0000 UTC m=+0.137700434 container remove 081e2c4cc99bd39d0191a245fb5231d20733361092459a5236a345fb3d11fffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_blackburn, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 12:44:35 compute-0 systemd[1]: libpod-conmon-081e2c4cc99bd39d0191a245fb5231d20733361092459a5236a345fb3d11fffb.scope: Deactivated successfully.
Nov 26 12:44:35 compute-0 podman[141130]: 2025-11-26 12:44:35.469067077 +0000 UTC m=+0.042041785 container create c25be37af7ba7b2aaf0ad9a59b907a2c38ddcde02823ef7033f3774087e5cb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elgamal, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:44:35 compute-0 systemd[1]: Started libpod-conmon-c25be37af7ba7b2aaf0ad9a59b907a2c38ddcde02823ef7033f3774087e5cb4d.scope.
Nov 26 12:44:35 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bfd016ff6f1a1d0e0ccc1144595b585b72345d5c0ca5f9b053e6d2bb742b4f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bfd016ff6f1a1d0e0ccc1144595b585b72345d5c0ca5f9b053e6d2bb742b4f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bfd016ff6f1a1d0e0ccc1144595b585b72345d5c0ca5f9b053e6d2bb742b4f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bfd016ff6f1a1d0e0ccc1144595b585b72345d5c0ca5f9b053e6d2bb742b4f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:44:35 compute-0 podman[141130]: 2025-11-26 12:44:35.545669344 +0000 UTC m=+0.118644051 container init c25be37af7ba7b2aaf0ad9a59b907a2c38ddcde02823ef7033f3774087e5cb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elgamal, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:44:35 compute-0 podman[141130]: 2025-11-26 12:44:35.453288635 +0000 UTC m=+0.026263362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:44:35 compute-0 podman[141130]: 2025-11-26 12:44:35.551508716 +0000 UTC m=+0.124483423 container start c25be37af7ba7b2aaf0ad9a59b907a2c38ddcde02823ef7033f3774087e5cb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elgamal, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 12:44:35 compute-0 podman[141130]: 2025-11-26 12:44:35.552632091 +0000 UTC m=+0.125606798 container attach c25be37af7ba7b2aaf0ad9a59b907a2c38ddcde02823ef7033f3774087e5cb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 12:44:35 compute-0 sudo[141221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfzdwmzilsgcezxtgdyztggnmykdnmrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161074.709371-157-115719914962467/AnsiballZ_copy.py'
Nov 26 12:44:35 compute-0 sudo[141221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:35 compute-0 python3.9[141223]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161074.709371-157-115719914962467/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:44:35 compute-0 sudo[141221]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:44:35
Nov 26 12:44:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 12:44:35 compute-0 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 12:44:35 compute-0 ceph-mgr[75236]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'backups', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', '.mgr']
Nov 26 12:44:35 compute-0 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 12:44:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:44:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:44:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:44:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:44:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:44:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:44:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 12:44:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:44:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 12:44:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:44:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:44:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:44:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:44:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:44:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:44:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:44:35 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:44:36 compute-0 sudo[141376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztdrpoxqlscbartobynwamatvcwmosii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161075.9246247-172-155368876192524/AnsiballZ_stat.py'
Nov 26 12:44:36 compute-0 sudo[141376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]: {
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:     "0": [
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:         {
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "devices": [
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "/dev/loop3"
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             ],
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "lv_name": "ceph_lv0",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "lv_size": "21470642176",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "name": "ceph_lv0",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "tags": {
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.cluster_name": "ceph",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.crush_device_class": "",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.encrypted": "0",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.osd_id": "0",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.type": "block",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.vdo": "0"
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             },
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "type": "block",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "vg_name": "ceph_vg0"
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:         }
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:     ],
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:     "1": [
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:         {
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "devices": [
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "/dev/loop4"
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             ],
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "lv_name": "ceph_lv1",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "lv_size": "21470642176",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "name": "ceph_lv1",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "tags": {
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.cluster_name": "ceph",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.crush_device_class": "",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.encrypted": "0",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.osd_id": "1",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.type": "block",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.vdo": "0"
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             },
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "type": "block",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "vg_name": "ceph_vg1"
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:         }
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:     ],
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:     "2": [
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:         {
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "devices": [
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "/dev/loop5"
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             ],
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "lv_name": "ceph_lv2",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "lv_size": "21470642176",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "name": "ceph_lv2",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "tags": {
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.cluster_name": "ceph",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.crush_device_class": "",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.encrypted": "0",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.osd_id": "2",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.type": "block",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:                 "ceph.vdo": "0"
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             },
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "type": "block",
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:             "vg_name": "ceph_vg2"
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:         }
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]:     ]
Nov 26 12:44:36 compute-0 pensive_elgamal[141167]: }
Nov 26 12:44:36 compute-0 systemd[1]: libpod-c25be37af7ba7b2aaf0ad9a59b907a2c38ddcde02823ef7033f3774087e5cb4d.scope: Deactivated successfully.
Nov 26 12:44:36 compute-0 conmon[141167]: conmon c25be37af7ba7b2aaf0a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c25be37af7ba7b2aaf0ad9a59b907a2c38ddcde02823ef7033f3774087e5cb4d.scope/container/memory.events
Nov 26 12:44:36 compute-0 podman[141130]: 2025-11-26 12:44:36.237465705 +0000 UTC m=+0.810440422 container died c25be37af7ba7b2aaf0ad9a59b907a2c38ddcde02823ef7033f3774087e5cb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:44:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bfd016ff6f1a1d0e0ccc1144595b585b72345d5c0ca5f9b053e6d2bb742b4f0-merged.mount: Deactivated successfully.
Nov 26 12:44:36 compute-0 podman[141130]: 2025-11-26 12:44:36.283414314 +0000 UTC m=+0.856389020 container remove c25be37af7ba7b2aaf0ad9a59b907a2c38ddcde02823ef7033f3774087e5cb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elgamal, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:44:36 compute-0 systemd[1]: libpod-conmon-c25be37af7ba7b2aaf0ad9a59b907a2c38ddcde02823ef7033f3774087e5cb4d.scope: Deactivated successfully.
Nov 26 12:44:36 compute-0 sudo[140935]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:36 compute-0 python3.9[141379]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:44:36 compute-0 sudo[141390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:44:36 compute-0 sudo[141390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:44:36 compute-0 sudo[141390]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:36 compute-0 sudo[141376]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:36 compute-0 sudo[141417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:44:36 compute-0 sudo[141417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:44:36 compute-0 sudo[141417]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:36 compute-0 sudo[141462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:44:36 compute-0 sudo[141462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:44:36 compute-0 sudo[141462]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:36 compute-0 sudo[141514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- raw list --format json
Nov 26 12:44:36 compute-0 sudo[141514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:44:36 compute-0 sudo[141612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohrriltafhmmapnqmsjufkwahwfzybuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161075.9246247-172-155368876192524/AnsiballZ_copy.py'
Nov 26 12:44:36 compute-0 sudo[141612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:36 compute-0 python3.9[141622]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161075.9246247-172-155368876192524/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:44:36 compute-0 podman[141646]: 2025-11-26 12:44:36.825881048 +0000 UTC m=+0.035404882 container create 1d7b760c8916e971f387bcf0ac51f3c741079a578f664eefc1c78fe73dae205a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_noyce, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:44:36 compute-0 sudo[141612]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:36 compute-0 systemd[1]: Started libpod-conmon-1d7b760c8916e971f387bcf0ac51f3c741079a578f664eefc1c78fe73dae205a.scope.
Nov 26 12:44:36 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:44:36 compute-0 podman[141646]: 2025-11-26 12:44:36.885696322 +0000 UTC m=+0.095220165 container init 1d7b760c8916e971f387bcf0ac51f3c741079a578f664eefc1c78fe73dae205a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 12:44:36 compute-0 podman[141646]: 2025-11-26 12:44:36.893414932 +0000 UTC m=+0.102938765 container start 1d7b760c8916e971f387bcf0ac51f3c741079a578f664eefc1c78fe73dae205a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_noyce, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:44:36 compute-0 podman[141646]: 2025-11-26 12:44:36.894745768 +0000 UTC m=+0.104269601 container attach 1d7b760c8916e971f387bcf0ac51f3c741079a578f664eefc1c78fe73dae205a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_noyce, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 12:44:36 compute-0 serene_noyce[141667]: 167 167
Nov 26 12:44:36 compute-0 systemd[1]: libpod-1d7b760c8916e971f387bcf0ac51f3c741079a578f664eefc1c78fe73dae205a.scope: Deactivated successfully.
Nov 26 12:44:36 compute-0 conmon[141667]: conmon 1d7b760c8916e971f387 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1d7b760c8916e971f387bcf0ac51f3c741079a578f664eefc1c78fe73dae205a.scope/container/memory.events
Nov 26 12:44:36 compute-0 podman[141646]: 2025-11-26 12:44:36.899335257 +0000 UTC m=+0.108859090 container died 1d7b760c8916e971f387bcf0ac51f3c741079a578f664eefc1c78fe73dae205a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_noyce, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:44:36 compute-0 podman[141646]: 2025-11-26 12:44:36.811734016 +0000 UTC m=+0.021257869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:44:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-d22e342f50bcd46fb5fe3a750e615239bc7b05fdd8b078dcbbe30ab2618a3d2b-merged.mount: Deactivated successfully.
Nov 26 12:44:36 compute-0 podman[141646]: 2025-11-26 12:44:36.927574929 +0000 UTC m=+0.137098763 container remove 1d7b760c8916e971f387bcf0ac51f3c741079a578f664eefc1c78fe73dae205a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 12:44:36 compute-0 systemd[1]: libpod-conmon-1d7b760c8916e971f387bcf0ac51f3c741079a578f664eefc1c78fe73dae205a.scope: Deactivated successfully.
Nov 26 12:44:37 compute-0 podman[141758]: 2025-11-26 12:44:37.069490468 +0000 UTC m=+0.037611446 container create 77e88b7d2e8578bff8c2a066159e624ac39bc5aa0b21200644a0551686521ce9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 12:44:37 compute-0 systemd[1]: Started libpod-conmon-77e88b7d2e8578bff8c2a066159e624ac39bc5aa0b21200644a0551686521ce9.scope.
Nov 26 12:44:37 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:44:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/537d3da9f82ea2483dc66fb2752c7e76efa9d2b15e804ad297f287694b313bff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:44:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/537d3da9f82ea2483dc66fb2752c7e76efa9d2b15e804ad297f287694b313bff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:44:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/537d3da9f82ea2483dc66fb2752c7e76efa9d2b15e804ad297f287694b313bff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:44:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/537d3da9f82ea2483dc66fb2752c7e76efa9d2b15e804ad297f287694b313bff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:44:37 compute-0 podman[141758]: 2025-11-26 12:44:37.137999107 +0000 UTC m=+0.106120105 container init 77e88b7d2e8578bff8c2a066159e624ac39bc5aa0b21200644a0551686521ce9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 12:44:37 compute-0 podman[141758]: 2025-11-26 12:44:37.144165536 +0000 UTC m=+0.112286513 container start 77e88b7d2e8578bff8c2a066159e624ac39bc5aa0b21200644a0551686521ce9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:44:37 compute-0 podman[141758]: 2025-11-26 12:44:37.145493606 +0000 UTC m=+0.113614584 container attach 77e88b7d2e8578bff8c2a066159e624ac39bc5aa0b21200644a0551686521ce9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 12:44:37 compute-0 podman[141758]: 2025-11-26 12:44:37.054067524 +0000 UTC m=+0.022188522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:44:37 compute-0 sudo[141849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyqfkoghvtyodbxhkktcfezlmgefffoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161076.9696996-187-239240890485320/AnsiballZ_stat.py'
Nov 26 12:44:37 compute-0 sudo[141849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:37 compute-0 ceph-mon[74966]: pgmap v316: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:37 compute-0 python3.9[141851]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:44:37 compute-0 sudo[141849]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:37 compute-0 sudo[141974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyorxungwifedgxlywwzeiycwnddxsrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161076.9696996-187-239240890485320/AnsiballZ_copy.py'
Nov 26 12:44:37 compute-0 sudo[141974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:37 compute-0 python3.9[141976]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161076.9696996-187-239240890485320/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:44:37 compute-0 sudo[141974]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:37 compute-0 exciting_hertz[141794]: {
Nov 26 12:44:37 compute-0 exciting_hertz[141794]:     "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 12:44:37 compute-0 exciting_hertz[141794]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:44:37 compute-0 exciting_hertz[141794]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 12:44:37 compute-0 exciting_hertz[141794]:         "osd_id": 1,
Nov 26 12:44:37 compute-0 exciting_hertz[141794]:         "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:44:37 compute-0 exciting_hertz[141794]:         "type": "bluestore"
Nov 26 12:44:37 compute-0 exciting_hertz[141794]:     },
Nov 26 12:44:37 compute-0 exciting_hertz[141794]:     "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 12:44:37 compute-0 exciting_hertz[141794]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:44:37 compute-0 exciting_hertz[141794]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 12:44:37 compute-0 exciting_hertz[141794]:         "osd_id": 2,
Nov 26 12:44:37 compute-0 exciting_hertz[141794]:         "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:44:37 compute-0 exciting_hertz[141794]:         "type": "bluestore"
Nov 26 12:44:37 compute-0 exciting_hertz[141794]:     },
Nov 26 12:44:37 compute-0 exciting_hertz[141794]:     "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 12:44:37 compute-0 exciting_hertz[141794]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:44:37 compute-0 exciting_hertz[141794]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 12:44:37 compute-0 exciting_hertz[141794]:         "osd_id": 0,
Nov 26 12:44:37 compute-0 exciting_hertz[141794]:         "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:44:37 compute-0 exciting_hertz[141794]:         "type": "bluestore"
Nov 26 12:44:37 compute-0 exciting_hertz[141794]:     }
Nov 26 12:44:37 compute-0 exciting_hertz[141794]: }
Nov 26 12:44:37 compute-0 systemd[1]: libpod-77e88b7d2e8578bff8c2a066159e624ac39bc5aa0b21200644a0551686521ce9.scope: Deactivated successfully.
Nov 26 12:44:37 compute-0 podman[141758]: 2025-11-26 12:44:37.967593134 +0000 UTC m=+0.935714112 container died 77e88b7d2e8578bff8c2a066159e624ac39bc5aa0b21200644a0551686521ce9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hertz, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:44:37 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-537d3da9f82ea2483dc66fb2752c7e76efa9d2b15e804ad297f287694b313bff-merged.mount: Deactivated successfully.
Nov 26 12:44:38 compute-0 podman[141758]: 2025-11-26 12:44:38.012151589 +0000 UTC m=+0.980272567 container remove 77e88b7d2e8578bff8c2a066159e624ac39bc5aa0b21200644a0551686521ce9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 12:44:38 compute-0 systemd[1]: libpod-conmon-77e88b7d2e8578bff8c2a066159e624ac39bc5aa0b21200644a0551686521ce9.scope: Deactivated successfully.
Nov 26 12:44:38 compute-0 sudo[141514]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:38 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:44:38 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:44:38 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:44:38 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:44:38 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 9e4c8dc7-61b1-48ff-a6fd-6b7c4bc46899 does not exist
Nov 26 12:44:38 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev d445658a-c4be-45e5-b9f5-bb8060dea5a8 does not exist
Nov 26 12:44:38 compute-0 sudo[142104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:44:38 compute-0 sudo[142104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:44:38 compute-0 sudo[142104]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:38 compute-0 sudo[142156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 12:44:38 compute-0 sudo[142156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:44:38 compute-0 sudo[142156]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:38 compute-0 sudo[142214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nulbmzyprweucrnpgfjngpbxayqocanu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161077.9623969-202-35931498877689/AnsiballZ_stat.py'
Nov 26 12:44:38 compute-0 sudo[142214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:38 compute-0 python3.9[142216]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:44:38 compute-0 sudo[142214]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:38 compute-0 sudo[142339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vszcqutlzphrynmanottymdkxegegawl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161077.9623969-202-35931498877689/AnsiballZ_copy.py'
Nov 26 12:44:38 compute-0 sudo[142339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:38 compute-0 python3.9[142341]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161077.9623969-202-35931498877689/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:44:38 compute-0 sudo[142339]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:39 compute-0 ceph-mon[74966]: pgmap v317: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:39 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:44:39 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:44:39 compute-0 sudo[142491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyzwiajnhqfdrrkgomwcugyshqpeeuvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161078.931722-217-130550727170573/AnsiballZ_stat.py'
Nov 26 12:44:39 compute-0 sudo[142491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:39 compute-0 python3.9[142493]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:44:39 compute-0 sudo[142491]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:39 compute-0 sudo[142616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxqueaocbkquutcwrsozjhvrsvivrdkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161078.931722-217-130550727170573/AnsiballZ_copy.py'
Nov 26 12:44:39 compute-0 sudo[142616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:39 compute-0 python3.9[142618]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161078.931722-217-130550727170573/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:44:39 compute-0 sudo[142616]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:39 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:40 compute-0 sudo[142768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgywtciwdenryhudgkvvgovxxymnxqaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161079.911647-232-189126059360555/AnsiballZ_file.py'
Nov 26 12:44:40 compute-0 sudo[142768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:40 compute-0 python3.9[142770]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:44:40 compute-0 sudo[142768]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:40 compute-0 sudo[142920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfgpozwpzkgrqhjrcrzjnklsasntgtgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161080.3942583-240-66043714799733/AnsiballZ_command.py'
Nov 26 12:44:40 compute-0 sudo[142920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:40 compute-0 python3.9[142922]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:44:40 compute-0 sudo[142920]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:40 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:44:41 compute-0 ceph-mon[74966]: pgmap v318: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:41 compute-0 sudo[143075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pumknweetugelxeaerrwqoptlpyvtosj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161080.8914819-248-241515402027083/AnsiballZ_blockinfile.py'
Nov 26 12:44:41 compute-0 sudo[143075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:41 compute-0 python3.9[143077]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:44:41 compute-0 sudo[143075]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:41 compute-0 sudo[143227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obbmicnipqkaoblbqkokhpgyskxvxscu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161081.5211954-257-195867743622810/AnsiballZ_command.py'
Nov 26 12:44:41 compute-0 sudo[143227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:41 compute-0 python3.9[143229]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:44:41 compute-0 sudo[143227]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:41 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:42 compute-0 sudo[143380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvdxhhizqbfovydmiypfbjwhansdghdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161081.9859076-265-167406159730827/AnsiballZ_stat.py'
Nov 26 12:44:42 compute-0 sudo[143380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:42 compute-0 python3.9[143382]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:44:42 compute-0 sudo[143380]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:42 compute-0 sshd-session[71362]: Received disconnect from 192.168.26.112 port 43822:11: disconnected by user
Nov 26 12:44:42 compute-0 sshd-session[71362]: Disconnected from user zuul 192.168.26.112 port 43822
Nov 26 12:44:42 compute-0 sshd-session[71359]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:44:42 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Nov 26 12:44:42 compute-0 systemd[1]: session-17.scope: Consumed 1min 1.230s CPU time.
Nov 26 12:44:42 compute-0 systemd-logind[777]: Session 17 logged out. Waiting for processes to exit.
Nov 26 12:44:42 compute-0 systemd-logind[777]: Removed session 17.
Nov 26 12:44:42 compute-0 sudo[143534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdtpifvvhxreoencqphovgagctfcildm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161082.4352202-273-134219645033098/AnsiballZ_command.py'
Nov 26 12:44:42 compute-0 sudo[143534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:42 compute-0 python3.9[143536]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:44:42 compute-0 sudo[143534]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:43 compute-0 sudo[143689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anilslrzxehvubcwfjnlyxmdbwpijnkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161082.8929193-281-58215799510838/AnsiballZ_file.py'
Nov 26 12:44:43 compute-0 sudo[143689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:43 compute-0 ceph-mon[74966]: pgmap v319: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:43 compute-0 python3.9[143691]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:44:43 compute-0 sudo[143689]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:43 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:44 compute-0 python3.9[143841]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:44:44 compute-0 sudo[143992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yselhdpcuyjzpsqnrtxfatzvbvtyojjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161084.4225104-321-88047842232482/AnsiballZ_command.py'
Nov 26 12:44:44 compute-0 sudo[143992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:44 compute-0 python3.9[143994]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:44:44 compute-0 ovs-vsctl[143995]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 26 12:44:44 compute-0 sudo[143992]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 12:44:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:44:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 12:44:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:44:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:44:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:44:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:44:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:44:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:44:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:44:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:44:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:44:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 12:44:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:44:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:44:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:44:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 12:44:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:44:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 12:44:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:44:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:44:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:44:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 12:44:45 compute-0 sudo[144145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aojvrllpapbmxfjfdnkckjqqbstfzind ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161084.895121-330-237489398881227/AnsiballZ_command.py'
Nov 26 12:44:45 compute-0 sudo[144145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:45 compute-0 ceph-mon[74966]: pgmap v320: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:45 compute-0 python3.9[144147]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:44:45 compute-0 sudo[144145]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:45 compute-0 sudo[144300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkhsboutfdbrvgubvtemeexcumqfwyjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161085.3422983-338-198218570219388/AnsiballZ_command.py'
Nov 26 12:44:45 compute-0 sudo[144300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:45 compute-0 python3.9[144302]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:44:45 compute-0 ovs-vsctl[144303]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 26 12:44:45 compute-0 sudo[144300]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:45 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:44:45 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:46 compute-0 python3.9[144453]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:44:46 compute-0 sudo[144605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upucgdbqapjmrhacptwwugbnznonzxva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161086.26302-355-210899152151987/AnsiballZ_file.py'
Nov 26 12:44:46 compute-0 sudo[144605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:46 compute-0 python3.9[144607]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:44:46 compute-0 sudo[144605]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:46 compute-0 sudo[144757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctghobodkfoeglnqzbgevbmyncvceojj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161086.7057416-363-66681142337976/AnsiballZ_stat.py'
Nov 26 12:44:46 compute-0 sudo[144757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:47 compute-0 python3.9[144759]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:44:47 compute-0 sudo[144757]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:47 compute-0 ceph-mon[74966]: pgmap v321: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:47 compute-0 sudo[144835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztihakmzyfbyslkiipecdssiewjcwkqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161086.7057416-363-66681142337976/AnsiballZ_file.py'
Nov 26 12:44:47 compute-0 sudo[144835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:47 compute-0 python3.9[144837]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:44:47 compute-0 sudo[144835]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:47 compute-0 sudo[144987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knvixgebluhcfdqlywpcmxvubaklxufp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161087.439684-363-152332915530156/AnsiballZ_stat.py'
Nov 26 12:44:47 compute-0 sudo[144987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:47 compute-0 python3.9[144989]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:44:47 compute-0 sudo[144987]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:47 compute-0 sudo[145065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amgidkvljmewnpqmshqbxntsorgwpwdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161087.439684-363-152332915530156/AnsiballZ_file.py'
Nov 26 12:44:47 compute-0 sudo[145065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:47 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:48 compute-0 python3.9[145067]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:44:48 compute-0 sudo[145065]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:48 compute-0 sudo[145217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfguutpchokkqcavaroojcvxkuwkguqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161088.1872022-386-190152080472726/AnsiballZ_file.py'
Nov 26 12:44:48 compute-0 sudo[145217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:48 compute-0 python3.9[145219]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:44:48 compute-0 sudo[145217]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:48 compute-0 sudo[145369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjvrscluovkpjtezonghqoeojaajoxoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161088.6376195-394-71660985660382/AnsiballZ_stat.py'
Nov 26 12:44:48 compute-0 sudo[145369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:48 compute-0 python3.9[145371]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:44:48 compute-0 sudo[145369]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:49 compute-0 ceph-mon[74966]: pgmap v322: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:49 compute-0 sudo[145447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tafynxdxwkbcgvfjkzqstpcnepjolvkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161088.6376195-394-71660985660382/AnsiballZ_file.py'
Nov 26 12:44:49 compute-0 sudo[145447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:49 compute-0 python3.9[145449]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:44:49 compute-0 sudo[145447]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:49 compute-0 sudo[145599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpjveqvhddwkxhsuxpwwprhijuqanojm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161089.4074736-406-65405097311729/AnsiballZ_stat.py'
Nov 26 12:44:49 compute-0 sudo[145599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:49 compute-0 python3.9[145601]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:44:49 compute-0 sudo[145599]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:49 compute-0 sudo[145677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yneyoatmuhjqksshpdepqhkbuhqgkjkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161089.4074736-406-65405097311729/AnsiballZ_file.py'
Nov 26 12:44:49 compute-0 sudo[145677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:49 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:50 compute-0 python3.9[145679]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:44:50 compute-0 sudo[145677]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:50 compute-0 sudo[145829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baiipsjlngexhfotaoidnmqbwervnhqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161090.1675558-418-56654486079600/AnsiballZ_systemd.py'
Nov 26 12:44:50 compute-0 sudo[145829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:50 compute-0 python3.9[145831]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:44:50 compute-0 systemd[1]: Reloading.
Nov 26 12:44:50 compute-0 systemd-rc-local-generator[145852]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:44:50 compute-0 systemd-sysv-generator[145855]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:44:50 compute-0 sudo[145829]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:44:51 compute-0 ceph-mon[74966]: pgmap v323: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:51 compute-0 sudo[146018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txfvtxppxtedqzmdxmxhqywgutyrjkdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161090.9590967-426-233842613171106/AnsiballZ_stat.py'
Nov 26 12:44:51 compute-0 sudo[146018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:51 compute-0 python3.9[146020]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:44:51 compute-0 sudo[146018]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:51 compute-0 sudo[146096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itgbxejpawbdredosxmgqhqyzqojudiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161090.9590967-426-233842613171106/AnsiballZ_file.py'
Nov 26 12:44:51 compute-0 sudo[146096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:51 compute-0 python3.9[146098]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:44:51 compute-0 sudo[146096]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:51 compute-0 sudo[146248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdambemwvygatmzunepcztszkvgtedwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161091.7467391-438-66604014048890/AnsiballZ_stat.py'
Nov 26 12:44:51 compute-0 sudo[146248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:51 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:52 compute-0 python3.9[146250]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:44:52 compute-0 sudo[146248]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:52 compute-0 sudo[146326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jonkpmmqsfxtjqpomhlrugksarqwabxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161091.7467391-438-66604014048890/AnsiballZ_file.py'
Nov 26 12:44:52 compute-0 sudo[146326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:52 compute-0 python3.9[146328]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:44:52 compute-0 sudo[146326]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:52 compute-0 sudo[146478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fulsrvkuxnnpvccsrwpxaithypheqeak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161092.5163586-450-264407935893832/AnsiballZ_systemd.py'
Nov 26 12:44:52 compute-0 sudo[146478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:52 compute-0 python3.9[146480]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:44:52 compute-0 systemd[1]: Reloading.
Nov 26 12:44:52 compute-0 systemd-rc-local-generator[146501]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:44:53 compute-0 systemd-sysv-generator[146504]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:44:53 compute-0 ceph-mon[74966]: pgmap v324: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:53 compute-0 systemd[1]: Starting Create netns directory...
Nov 26 12:44:53 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 26 12:44:53 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 26 12:44:53 compute-0 systemd[1]: Finished Create netns directory.
Nov 26 12:44:53 compute-0 sudo[146478]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:53 compute-0 sudo[146670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoohxmczjudmkuohfxcjculqivbtcqvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161093.372959-460-108136033067082/AnsiballZ_file.py'
Nov 26 12:44:53 compute-0 sudo[146670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:53 compute-0 python3.9[146672]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:44:53 compute-0 sudo[146670]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:53 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:54 compute-0 sudo[146822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iprattqoofsywirfqyncmxnsjptgthtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161093.8254797-468-102601429851239/AnsiballZ_stat.py'
Nov 26 12:44:54 compute-0 sudo[146822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:54 compute-0 python3.9[146824]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:44:54 compute-0 sudo[146822]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:54 compute-0 sudo[146945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlpzhyxpfdrwxytwzeknlhwxmmqtdbus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161093.8254797-468-102601429851239/AnsiballZ_copy.py'
Nov 26 12:44:54 compute-0 sudo[146945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:54 compute-0 python3.9[146947]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161093.8254797-468-102601429851239/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:44:54 compute-0 sudo[146945]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:55 compute-0 sudo[147097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gymtlwzmijmvvlfmmyletmmwbutyhawv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161094.8498287-485-27057577643209/AnsiballZ_file.py'
Nov 26 12:44:55 compute-0 sudo[147097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:55 compute-0 ceph-mon[74966]: pgmap v325: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:55 compute-0 python3.9[147099]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:44:55 compute-0 sudo[147097]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:55 compute-0 sudo[147249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfnadchwkxqxuukpsagyxmrjeimbmppn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161095.3653884-493-195409107116357/AnsiballZ_stat.py'
Nov 26 12:44:55 compute-0 sudo[147249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:55 compute-0 python3.9[147251]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:44:55 compute-0 sudo[147249]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:55 compute-0 sudo[147372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfjgfndxxfjlujqzwuvqamepqpzjivtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161095.3653884-493-195409107116357/AnsiballZ_copy.py'
Nov 26 12:44:55 compute-0 sudo[147372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:55 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:44:55 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:56 compute-0 python3.9[147374]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764161095.3653884-493-195409107116357/.source.json _original_basename=.hl_2tjq8 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:44:56 compute-0 sudo[147372]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:56 compute-0 sudo[147524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezviyzkhtrwscocpuvrwvsehchoumpjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161096.2797754-508-239060139017987/AnsiballZ_file.py'
Nov 26 12:44:56 compute-0 sudo[147524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:56 compute-0 python3.9[147526]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:44:56 compute-0 sudo[147524]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:57 compute-0 sudo[147676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voahfzfqwwbwycsxviphmxfeovjolixy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161096.8225145-516-261432050189411/AnsiballZ_stat.py'
Nov 26 12:44:57 compute-0 sudo[147676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:57 compute-0 ceph-mon[74966]: pgmap v326: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:57 compute-0 sudo[147676]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:57 compute-0 sudo[147799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kadypsuytnedxkwixzcftxzrekmbercu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161096.8225145-516-261432050189411/AnsiballZ_copy.py'
Nov 26 12:44:57 compute-0 sudo[147799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:57 compute-0 sudo[147799]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:57 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:58 compute-0 sudo[147951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpoatlrmcurgeiflmhbcyiutfshoxlvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161097.8316977-533-35761804037347/AnsiballZ_container_config_data.py'
Nov 26 12:44:58 compute-0 sudo[147951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:58 compute-0 python3.9[147953]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 26 12:44:58 compute-0 sudo[147951]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:58 compute-0 sudo[148103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umffezwhcjiyutpefliwtzfalgbfhvox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161098.4901407-542-263210754593113/AnsiballZ_container_config_hash.py'
Nov 26 12:44:58 compute-0 sudo[148103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:58 compute-0 python3.9[148105]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 12:44:58 compute-0 sudo[148103]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:59 compute-0 ceph-mon[74966]: pgmap v327: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:44:59 compute-0 sudo[148255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zletnruohzigqiveekozfuvghlddtfih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161099.128397-551-6365519210811/AnsiballZ_podman_container_info.py'
Nov 26 12:44:59 compute-0 sudo[148255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:44:59 compute-0 python3.9[148257]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 26 12:44:59 compute-0 sudo[148255]: pam_unix(sudo:session): session closed for user root
Nov 26 12:44:59 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:00 compute-0 sudo[148427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brjdgfzyhqblwtcyfzpkfryvdcjedtwm ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764161100.1455069-564-74689557389101/AnsiballZ_edpm_container_manage.py'
Nov 26 12:45:00 compute-0 sudo[148427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:00 compute-0 python3[148429]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 12:45:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:45:01 compute-0 ceph-mon[74966]: pgmap v328: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:01 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:03 compute-0 ceph-mon[74966]: pgmap v329: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:03 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:05 compute-0 ceph-mon[74966]: pgmap v330: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:45:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:45:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:45:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:45:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:45:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:45:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:45:05 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:06 compute-0 podman[148440]: 2025-11-26 12:45:06.476309946 +0000 UTC m=+5.698105012 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 26 12:45:06 compute-0 podman[148536]: 2025-11-26 12:45:06.609343232 +0000 UTC m=+0.033224808 container create 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 26 12:45:06 compute-0 podman[148536]: 2025-11-26 12:45:06.595229903 +0000 UTC m=+0.019111489 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 26 12:45:06 compute-0 python3[148429]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
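The PODMAN-CONTAINER-DEBUG line above is the literal command that ansible-edpm_container_manage generated from the config_data label attached to the container create event at 12:45:06. As a rough illustration of that mapping, the following is a minimal Python sketch (not the module's actual code; the config_data below is abridged from the logged value, and the helper name is invented for the example) of how a dict of that shape translates into the podman create flags seen in the log:

#!/usr/bin/env python3
# Illustrative sketch only: how a config_data entry shaped like the one logged
# above maps onto `podman create` flags. Not edpm_container_manage's own code.

config_data = {
    "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
    "healthcheck": {"test": "/openstack/healthcheck"},
    "image": "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46...",
    "net": "host",
    "privileged": True,
    "user": "root",
    "volumes": ["/lib/modules:/lib/modules:ro", "/run:/run"],  # abridged
}

def podman_create_args(name: str, data: dict) -> list[str]:
    """Translate a simplified config_data dict into podman CLI arguments."""
    args = ["podman", "create", "--name", name]
    for key, value in data.get("environment", {}).items():
        args += ["--env", f"{key}={value}"]
    if "healthcheck" in data:
        args += ["--healthcheck-command", data["healthcheck"]["test"]]
    args += ["--network", data.get("net", "bridge")]
    if data.get("privileged"):
        args.append("--privileged=True")
    if "user" in data:
        args += ["--user", data["user"]]
    for volume in data.get("volumes", []):
        args += ["--volume", volume]
    args.append(data["image"])
    return args

if __name__ == "__main__":
    print(" ".join(podman_create_args("ovn_controller", config_data)))

Running the sketch prints a podman create command of the same general shape as the one logged above, minus the label and log-driver flags.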
Nov 26 12:45:06 compute-0 sudo[148427]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:07 compute-0 sudo[148713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqdgxrusncdycofkzkgyqhhnqxkgviit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161106.8454561-572-238412295278942/AnsiballZ_stat.py'
Nov 26 12:45:07 compute-0 sudo[148713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:07 compute-0 ceph-mon[74966]: pgmap v331: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:07 compute-0 python3.9[148715]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:45:07 compute-0 sudo[148713]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:07 compute-0 sudo[148867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxuaridawofhmafexzndonhefivogsyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161107.426722-581-100935530320557/AnsiballZ_file.py'
Nov 26 12:45:07 compute-0 sudo[148867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:07 compute-0 python3.9[148869]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:45:07 compute-0 sudo[148867]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:07 compute-0 sudo[148943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oddvbxacrwcqhryevyroyqcufickzbxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161107.426722-581-100935530320557/AnsiballZ_stat.py'
Nov 26 12:45:07 compute-0 sudo[148943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:07 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:08 compute-0 python3.9[148945]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:45:08 compute-0 sudo[148943]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:08 compute-0 sudo[149094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snacekgkarqursdfzbomjpulwuwbxylm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161108.1722167-581-75682255869537/AnsiballZ_copy.py'
Nov 26 12:45:08 compute-0 sudo[149094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:08 compute-0 python3.9[149096]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764161108.1722167-581-75682255869537/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:45:08 compute-0 sudo[149094]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:08 compute-0 sudo[149170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbbxutgxbhdjbsphbuuodjfjfdiuvwke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161108.1722167-581-75682255869537/AnsiballZ_systemd.py'
Nov 26 12:45:08 compute-0 sudo[149170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:09 compute-0 python3.9[149172]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 12:45:09 compute-0 systemd[1]: Reloading.
Nov 26 12:45:09 compute-0 ceph-mon[74966]: pgmap v332: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:09 compute-0 systemd-rc-local-generator[149193]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:45:09 compute-0 systemd-sysv-generator[149196]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:45:09 compute-0 sudo[149170]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:09 compute-0 sudo[149281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmslfgovqxougfpayqxabnwanrwrjmzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161108.1722167-581-75682255869537/AnsiballZ_systemd.py'
Nov 26 12:45:09 compute-0 sudo[149281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:09 compute-0 python3.9[149283]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:45:09 compute-0 systemd[1]: Reloading.
Nov 26 12:45:09 compute-0 systemd-sysv-generator[149311]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:45:09 compute-0 systemd-rc-local-generator[149308]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:45:10 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:10 compute-0 systemd[1]: Starting ovn_controller container...
Nov 26 12:45:10 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f8bf7b7e76557e3df4dcb603263dddbf8ea7838cd6ec0dda380d4162886aab8/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 26 12:45:10 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0.
Nov 26 12:45:10 compute-0 podman[149323]: 2025-11-26 12:45:10.24422118 +0000 UTC m=+0.118883259 container init 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 26 12:45:10 compute-0 ovn_controller[149335]: + sudo -E kolla_set_configs
Nov 26 12:45:10 compute-0 podman[149323]: 2025-11-26 12:45:10.269692274 +0000 UTC m=+0.144354342 container start 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 26 12:45:10 compute-0 edpm-start-podman-container[149323]: ovn_controller
Nov 26 12:45:10 compute-0 systemd[1]: Created slice User Slice of UID 0.
Nov 26 12:45:10 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 26 12:45:10 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 26 12:45:10 compute-0 systemd[1]: Starting User Manager for UID 0...
Nov 26 12:45:10 compute-0 systemd[149363]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Nov 26 12:45:10 compute-0 edpm-start-podman-container[149322]: Creating additional drop-in dependency for "ovn_controller" (4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0)
Nov 26 12:45:10 compute-0 podman[149342]: 2025-11-26 12:45:10.345392117 +0000 UTC m=+0.064299059 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:45:10 compute-0 systemd[1]: 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0-378689c647c93ae2.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 12:45:10 compute-0 systemd[1]: 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0-378689c647c93ae2.service: Failed with result 'exit-code'.
Nov 26 12:45:10 compute-0 systemd[1]: Reloading.
Nov 26 12:45:10 compute-0 systemd[149363]: Queued start job for default target Main User Target.
Nov 26 12:45:10 compute-0 systemd[149363]: Created slice User Application Slice.
Nov 26 12:45:10 compute-0 systemd[149363]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 26 12:45:10 compute-0 systemd[149363]: Started Daily Cleanup of User's Temporary Directories.
Nov 26 12:45:10 compute-0 systemd[149363]: Reached target Paths.
Nov 26 12:45:10 compute-0 systemd[149363]: Reached target Timers.
Nov 26 12:45:10 compute-0 systemd[149363]: Starting D-Bus User Message Bus Socket...
Nov 26 12:45:10 compute-0 systemd[149363]: Starting Create User's Volatile Files and Directories...
Nov 26 12:45:10 compute-0 systemd-sysv-generator[149415]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:45:10 compute-0 systemd[149363]: Listening on D-Bus User Message Bus Socket.
Nov 26 12:45:10 compute-0 systemd[149363]: Reached target Sockets.
Nov 26 12:45:10 compute-0 systemd-rc-local-generator[149410]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:45:10 compute-0 systemd[149363]: Finished Create User's Volatile Files and Directories.
Nov 26 12:45:10 compute-0 systemd[149363]: Reached target Basic System.
Nov 26 12:45:10 compute-0 systemd[149363]: Reached target Main User Target.
Nov 26 12:45:10 compute-0 systemd[149363]: Startup finished in 129ms.
Nov 26 12:45:10 compute-0 systemd[1]: Started User Manager for UID 0.
Nov 26 12:45:10 compute-0 systemd[1]: Started ovn_controller container.
Nov 26 12:45:10 compute-0 systemd[1]: Started Session c1 of User root.
Nov 26 12:45:10 compute-0 sudo[149281]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:10 compute-0 ovn_controller[149335]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 12:45:10 compute-0 ovn_controller[149335]: INFO:__main__:Validating config file
Nov 26 12:45:10 compute-0 ovn_controller[149335]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 12:45:10 compute-0 ovn_controller[149335]: INFO:__main__:Writing out command to execute
Nov 26 12:45:10 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 26 12:45:10 compute-0 ovn_controller[149335]: ++ cat /run_command
Nov 26 12:45:10 compute-0 ovn_controller[149335]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 26 12:45:10 compute-0 ovn_controller[149335]: + ARGS=
Nov 26 12:45:10 compute-0 ovn_controller[149335]: + sudo kolla_copy_cacerts
Nov 26 12:45:10 compute-0 systemd[1]: Started Session c2 of User root.
Nov 26 12:45:10 compute-0 ovn_controller[149335]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 26 12:45:10 compute-0 ovn_controller[149335]: + [[ ! -n '' ]]
Nov 26 12:45:10 compute-0 ovn_controller[149335]: + . kolla_extend_start
Nov 26 12:45:10 compute-0 ovn_controller[149335]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 26 12:45:10 compute-0 ovn_controller[149335]: + umask 0022
Nov 26 12:45:10 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 26 12:45:10 compute-0 ovn_controller[149335]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
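The kolla_set_configs / run_command exchange traced above is the usual kolla entrypoint sequence: the wrapper loads and validates /var/lib/kolla/config_files/config.json (bind-mounted from /var/lib/kolla/config_files/ovn_controller.json on the host), writes the service command to /run_command, and the start script then cats that file and execs it. A minimal Python sketch of that flow, assuming only that config.json carries a "command" key holding the ovn-controller invocation seen in the log (the real kolla_set_configs also processes the config_files copy list and permissions):

#!/usr/bin/env python3
# Minimal sketch of the kolla entrypoint flow visible above. Paths mirror the
# log; the actual kolla_set_configs does considerably more.
import json
import os
import shlex

CONFIG = "/var/lib/kolla/config_files/config.json"
RUN_COMMAND = "/run_command"

# Load and validate the container config (the kolla_set_configs step).
with open(CONFIG) as f:
    cfg = json.load(f)                      # e.g. {"command": "/usr/bin/ovn-controller ..."}

# "Writing out command to execute"
with open(RUN_COMMAND, "w") as f:
    f.write(cfg["command"])

# The start script then does: CMD=$(cat /run_command); exec $CMD
with open(RUN_COMMAND) as f:
    cmd = shlex.split(f.read())
os.execvp(cmd[0], cmd)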
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 26 12:45:10 compute-0 NetworkManager[49024]: <info>  [1764161110.7585] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 26 12:45:10 compute-0 NetworkManager[49024]: <info>  [1764161110.7590] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 12:45:10 compute-0 NetworkManager[49024]: <info>  [1764161110.7599] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 26 12:45:10 compute-0 NetworkManager[49024]: <info>  [1764161110.7606] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 26 12:45:10 compute-0 NetworkManager[49024]: <info>  [1764161110.7608] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 26 12:45:10 compute-0 kernel: br-int: entered promiscuous mode
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00022|main|INFO|OVS feature set changed, force recompute.
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 26 12:45:10 compute-0 ovn_controller[149335]: 2025-11-26T12:45:10Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 26 12:45:10 compute-0 NetworkManager[49024]: <info>  [1764161110.7765] manager: (ovn-69681b-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 26 12:45:10 compute-0 systemd-udevd[149497]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 12:45:10 compute-0 systemd-udevd[149498]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 12:45:10 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Nov 26 12:45:10 compute-0 NetworkManager[49024]: <info>  [1764161110.7957] device (genev_sys_6081): carrier: link connected
Nov 26 12:45:10 compute-0 NetworkManager[49024]: <info>  [1764161110.7959] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Nov 26 12:45:10 compute-0 sudo[149597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgnvnuwiuqgmgqdlwfdmdwwkoetcpaor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161110.7714117-609-57271791323967/AnsiballZ_command.py'
Nov 26 12:45:10 compute-0 sudo[149597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:10 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:45:11 compute-0 python3.9[149599]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:45:11 compute-0 ceph-mon[74966]: pgmap v333: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:11 compute-0 ovs-vsctl[149600]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 26 12:45:11 compute-0 sudo[149597]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:11 compute-0 sudo[149750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfgzzkmkgefiktlemyqmwsshbfoqggdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161111.2740967-617-149354060978746/AnsiballZ_command.py'
Nov 26 12:45:11 compute-0 sudo[149750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:11 compute-0 python3.9[149752]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:45:11 compute-0 ovs-vsctl[149754]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 26 12:45:11 compute-0 sudo[149750]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:12 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:12 compute-0 sudo[149905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvowfnlolxqqcetomltzghjnoxoittzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161111.9177327-631-29537488271051/AnsiballZ_command.py'
Nov 26 12:45:12 compute-0 sudo[149905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:12 compute-0 python3.9[149907]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:45:12 compute-0 ovs-vsctl[149908]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 26 12:45:12 compute-0 sudo[149905]: pam_unix(sudo:session): session closed for user root
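The db_ctl_base ERR at 12:45:11 is expected rather than a failure: the play reads external_ids:ovn-cms-options before clearing it, the read errors only because the key was never set, and the subsequent remove succeeds. The same check-then-remove pattern, sketched in Python for illustration only (the playbook itself shells out to ovs-vsctl exactly as logged):

#!/usr/bin/env python3
# Illustrative sketch of the idempotent cleanup seen above: query the
# ovn-cms-options key and remove it; a missing key is not an error.
import subprocess

def get_cms_options() -> str:
    result = subprocess.run(
        ["ovs-vsctl", "get", "Open_vSwitch", ".", "external_ids:ovn-cms-options"],
        capture_output=True, text=True,
    )
    # ovs-vsctl exits non-zero (db_ctl_base|ERR) when the key is absent.
    return result.stdout.strip().strip('"') if result.returncode == 0 else ""

def remove_cms_options() -> None:
    # `remove` on an absent map key succeeds, as the log shows.
    subprocess.run(
        ["ovs-vsctl", "remove", "Open_vSwitch", ".", "external_ids", "ovn-cms-options"],
        check=True,
    )

if __name__ == "__main__":
    print("current ovn-cms-options:", get_cms_options() or "<unset>")
    remove_cms_options()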
Nov 26 12:45:12 compute-0 sshd-session[137956]: Connection closed by 192.168.122.30 port 58104
Nov 26 12:45:12 compute-0 sshd-session[137953]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:45:12 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Nov 26 12:45:12 compute-0 systemd[1]: session-45.scope: Consumed 44.799s CPU time.
Nov 26 12:45:12 compute-0 systemd-logind[777]: Session 45 logged out. Waiting for processes to exit.
Nov 26 12:45:12 compute-0 systemd-logind[777]: Removed session 45.
Nov 26 12:45:13 compute-0 ceph-mon[74966]: pgmap v334: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:14 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:15 compute-0 ceph-mon[74966]: pgmap v335: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:15 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:45:16 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:17 compute-0 ceph-mon[74966]: pgmap v336: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:17 compute-0 sshd-session[149933]: Accepted publickey for zuul from 192.168.122.30 port 33762 ssh2: ECDSA SHA256:oYqKaXpw3UXfGsjV9kVmxjhHDxrL0kntNo/c2mjveus
Nov 26 12:45:17 compute-0 systemd-logind[777]: New session 47 of user zuul.
Nov 26 12:45:17 compute-0 systemd[1]: Started Session 47 of User zuul.
Nov 26 12:45:17 compute-0 sshd-session[149933]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:45:18 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:18 compute-0 python3.9[150086]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:45:19 compute-0 ceph-mon[74966]: pgmap v337: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:19 compute-0 sudo[150240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnhduiqmnphsfctejlckctwdydhuyujs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161119.1800773-34-156338694029401/AnsiballZ_file.py'
Nov 26 12:45:19 compute-0 sudo[150240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:19 compute-0 python3.9[150242]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:45:19 compute-0 sudo[150240]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:19 compute-0 sudo[150392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlzbdisjrvulyuqhdbbztfposcignoyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161119.7921493-34-33538889782205/AnsiballZ_file.py'
Nov 26 12:45:19 compute-0 sudo[150392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:20 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:20 compute-0 python3.9[150394]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:45:20 compute-0 sudo[150392]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:20 compute-0 sudo[150544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywrsxbznutqtkqomdfqwbegkyfkxosfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161120.2575095-34-266618508649338/AnsiballZ_file.py'
Nov 26 12:45:20 compute-0 sudo[150544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:20 compute-0 python3.9[150546]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:45:20 compute-0 sudo[150544]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:20 compute-0 systemd[1]: Stopping User Manager for UID 0...
Nov 26 12:45:20 compute-0 systemd[149363]: Activating special unit Exit the Session...
Nov 26 12:45:20 compute-0 systemd[149363]: Stopped target Main User Target.
Nov 26 12:45:20 compute-0 systemd[149363]: Stopped target Basic System.
Nov 26 12:45:20 compute-0 systemd[149363]: Stopped target Paths.
Nov 26 12:45:20 compute-0 systemd[149363]: Stopped target Sockets.
Nov 26 12:45:20 compute-0 systemd[149363]: Stopped target Timers.
Nov 26 12:45:20 compute-0 systemd[149363]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 26 12:45:20 compute-0 systemd[149363]: Closed D-Bus User Message Bus Socket.
Nov 26 12:45:20 compute-0 systemd[149363]: Stopped Create User's Volatile Files and Directories.
Nov 26 12:45:20 compute-0 systemd[149363]: Removed slice User Application Slice.
Nov 26 12:45:20 compute-0 systemd[149363]: Reached target Shutdown.
Nov 26 12:45:20 compute-0 systemd[149363]: Finished Exit the Session.
Nov 26 12:45:20 compute-0 systemd[149363]: Reached target Exit the Session.
Nov 26 12:45:20 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Nov 26 12:45:20 compute-0 systemd[1]: Stopped User Manager for UID 0.
Nov 26 12:45:20 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 26 12:45:20 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 26 12:45:20 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 26 12:45:20 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 26 12:45:20 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Nov 26 12:45:20 compute-0 sudo[150697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsuqzktghfstxhjaciwsqalbaqkuuhgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161120.706199-34-102841784010037/AnsiballZ_file.py'
Nov 26 12:45:20 compute-0 sudo[150697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:45:21 compute-0 python3.9[150699]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:45:21 compute-0 sudo[150697]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:21 compute-0 ceph-mon[74966]: pgmap v338: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:21 compute-0 sudo[150849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqpzshpundmlhrffncojjeopbqzpixsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161121.1958663-34-59289380223149/AnsiballZ_file.py'
Nov 26 12:45:21 compute-0 sudo[150849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:21 compute-0 python3.9[150851]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:45:21 compute-0 sudo[150849]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:22 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:22 compute-0 python3.9[151001]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:45:22 compute-0 sudo[151151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldcpznedmubuxzrmsbtmvdadfuebzyco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161122.2559347-78-263527115176883/AnsiballZ_seboolean.py'
Nov 26 12:45:22 compute-0 sudo[151151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:22 compute-0 python3.9[151153]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 26 12:45:23 compute-0 ceph-mon[74966]: pgmap v339: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:23 compute-0 sudo[151151]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:23 compute-0 python3.9[151303]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:45:24 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:24 compute-0 python3.9[151425]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161123.4477775-86-277372435760151/.source follow=False _original_basename=haproxy.j2 checksum=deae64da24ad28f71dc47276f2e9f268f19a4519 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:45:25 compute-0 python3.9[151575]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:45:25 compute-0 ceph-mon[74966]: pgmap v340: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:25 compute-0 python3.9[151696]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161124.841485-101-121181581287861/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:45:25 compute-0 sudo[151846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqyhbzxmnuplfozmpmevynmrddwbpojb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161125.7390568-118-269229587684843/AnsiballZ_setup.py'
Nov 26 12:45:25 compute-0 sudo[151846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:25 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:45:26 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:26 compute-0 python3.9[151848]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 12:45:26 compute-0 sudo[151846]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:26 compute-0 sudo[151930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnzicogetexipspnbllalzgxrpwecave ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161125.7390568-118-269229587684843/AnsiballZ_dnf.py'
Nov 26 12:45:26 compute-0 sudo[151930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:26 compute-0 python3.9[151932]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 12:45:27 compute-0 ceph-mon[74966]: pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:27 compute-0 sudo[151930]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:28 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:28 compute-0 sudo[152083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qohirkzoaaavxliqnkjzfxfdrvxfmdoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161128.001988-130-6629036095137/AnsiballZ_systemd.py'
Nov 26 12:45:28 compute-0 sudo[152083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:28 compute-0 python3.9[152085]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 12:45:28 compute-0 sudo[152083]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:29 compute-0 python3.9[152239]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:45:29 compute-0 ceph-mon[74966]: pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:30 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:30 compute-0 python3.9[152361]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161128.8275735-138-72786503374940/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:45:30 compute-0 python3.9[152511]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:45:30 compute-0 python3.9[152632]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161130.1160903-138-89717683808497/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:45:30 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:45:31 compute-0 ceph-mon[74966]: pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:31 compute-0 python3.9[152782]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:45:31 compute-0 python3.9[152903]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161131.3148656-182-172325717945183/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:45:32 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:32 compute-0 python3.9[153053]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:45:32 compute-0 python3.9[153174]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161132.0942104-182-215787693767612/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:45:33 compute-0 ceph-mon[74966]: pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:33 compute-0 python3.9[153324]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:45:33 compute-0 sudo[153476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghmxjbckbfqftkyxzhwcnozdjjcvyyjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161133.3708043-220-164096268639696/AnsiballZ_file.py'
Nov 26 12:45:33 compute-0 sudo[153476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:33 compute-0 python3.9[153478]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:45:33 compute-0 sudo[153476]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:34 compute-0 sudo[153628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiqpxtfjthsdhdafnogbzqhuwfpybewp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161133.8336294-228-234692001597064/AnsiballZ_stat.py'
Nov 26 12:45:34 compute-0 sudo[153628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:34 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:34 compute-0 python3.9[153630]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:45:34 compute-0 sudo[153628]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:34 compute-0 sudo[153706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvgmpumscryareyyshtbptlgknzmarnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161133.8336294-228-234692001597064/AnsiballZ_file.py'
Nov 26 12:45:34 compute-0 sudo[153706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:34 compute-0 python3.9[153708]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:45:34 compute-0 sudo[153706]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:34 compute-0 sudo[153858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avpjnqkhakyeypzayilvugknieaanprd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161134.6184335-228-194744156631483/AnsiballZ_stat.py'
Nov 26 12:45:34 compute-0 sudo[153858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:34 compute-0 python3.9[153860]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:45:34 compute-0 sudo[153858]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:35 compute-0 sudo[153936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmekbgwhvflkxbmtrcumobbemaohebab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161134.6184335-228-194744156631483/AnsiballZ_file.py'
Nov 26 12:45:35 compute-0 sudo[153936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:35 compute-0 ceph-mon[74966]: pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:35 compute-0 python3.9[153938]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:45:35 compute-0 sudo[153936]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:35 compute-0 sudo[154088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgmkszutcqugjdghckcaehpqetysneqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161135.4260452-251-142128757217572/AnsiballZ_file.py'
Nov 26 12:45:35 compute-0 sudo[154088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:35 compute-0 python3.9[154090]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:45:35 compute-0 sudo[154088]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:45:35
Nov 26 12:45:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 12:45:35 compute-0 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 12:45:35 compute-0 ceph-mgr[75236]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', '.rgw.root', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'volumes']
Nov 26 12:45:35 compute-0 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 12:45:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:45:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:45:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:45:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:45:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:45:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:45:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 12:45:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:45:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 12:45:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:45:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:45:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:45:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:45:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:45:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:45:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:45:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:45:36 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:36 compute-0 sudo[154240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-henaovlunijdrxyemmfuvywdeexmuyxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161135.862488-259-200804573148120/AnsiballZ_stat.py'
Nov 26 12:45:36 compute-0 sudo[154240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:36 compute-0 python3.9[154242]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:45:36 compute-0 sudo[154240]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:36 compute-0 sudo[154318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkwwkudivjmqglwfwyqufkspsyswxjwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161135.862488-259-200804573148120/AnsiballZ_file.py'
Nov 26 12:45:36 compute-0 sudo[154318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:36 compute-0 python3.9[154320]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:45:36 compute-0 sudo[154318]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:36 compute-0 sudo[154470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuauoszvptsrgtcyfujsrwgreeibqhsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161136.6771226-271-248049811011095/AnsiballZ_stat.py'
Nov 26 12:45:36 compute-0 sudo[154470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:37 compute-0 python3.9[154472]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:45:37 compute-0 sudo[154470]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:37 compute-0 sudo[154548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awsichhpeiskpucezyszvmzxqgbdwzzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161136.6771226-271-248049811011095/AnsiballZ_file.py'
Nov 26 12:45:37 compute-0 sudo[154548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:37 compute-0 ceph-mon[74966]: pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:37 compute-0 python3.9[154550]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:45:37 compute-0 sudo[154548]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:37 compute-0 sudo[154700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srftqxavbhblabjslljhtbznfrzptkas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161137.4505122-283-179759531703671/AnsiballZ_systemd.py'
Nov 26 12:45:37 compute-0 sudo[154700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:37 compute-0 python3.9[154702]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:45:37 compute-0 systemd[1]: Reloading.
Nov 26 12:45:37 compute-0 systemd-rc-local-generator[154725]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:45:37 compute-0 systemd-sysv-generator[154728]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:45:38 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:38 compute-0 sudo[154700]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:38 compute-0 sudo[154743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:45:38 compute-0 sudo[154743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:45:38 compute-0 sudo[154743]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:38 compute-0 sudo[154789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:45:38 compute-0 sudo[154789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:45:38 compute-0 sudo[154789]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:38 compute-0 sudo[154835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:45:38 compute-0 sudo[154835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:45:38 compute-0 sudo[154835]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:38 compute-0 sudo[154885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 12:45:38 compute-0 sudo[154885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:45:38 compute-0 sudo[154991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtvmxnveerayghisgzlwisdoyrwooqdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161138.277445-291-79578023119307/AnsiballZ_stat.py'
Nov 26 12:45:38 compute-0 sudo[154991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:38 compute-0 python3.9[154998]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:45:38 compute-0 sudo[154991]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:38 compute-0 sudo[154885]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:38 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:45:38 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:45:38 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 12:45:38 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:45:38 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 12:45:38 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:45:38 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 61b6c5b2-25a2-43ab-b8c6-75cc96a67ede does not exist
Nov 26 12:45:38 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 11ba02a1-5d8e-4791-a340-e642c3bc4467 does not exist
Nov 26 12:45:38 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 9682eea2-873f-490f-b2d2-af8d9b33b46a does not exist
Nov 26 12:45:38 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 12:45:38 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:45:38 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 12:45:38 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:45:38 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:45:38 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:45:38 compute-0 sudo[155048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:45:38 compute-0 sudo[155048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:45:38 compute-0 sudo[155048]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:38 compute-0 sudo[155137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xavwcfzklxgnithbzlhsjtbjgtqkpohz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161138.277445-291-79578023119307/AnsiballZ_file.py'
Nov 26 12:45:38 compute-0 sudo[155137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:38 compute-0 sudo[155103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:45:38 compute-0 sudo[155103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:45:38 compute-0 sudo[155103]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:38 compute-0 sudo[155148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:45:38 compute-0 sudo[155148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:45:38 compute-0 sudo[155148]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:38 compute-0 sudo[155173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 12:45:38 compute-0 sudo[155173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:45:38 compute-0 python3.9[155145]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:45:38 compute-0 sudo[155137]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:39 compute-0 podman[155274]: 2025-11-26 12:45:39.150964665 +0000 UTC m=+0.029123228 container create 14a644ea5c943545d673e10d87c96ca431d3f8c558e853b07ed837635cc633bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 26 12:45:39 compute-0 systemd[1]: Started libpod-conmon-14a644ea5c943545d673e10d87c96ca431d3f8c558e853b07ed837635cc633bd.scope.
Nov 26 12:45:39 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:45:39 compute-0 podman[155274]: 2025-11-26 12:45:39.208023608 +0000 UTC m=+0.086182181 container init 14a644ea5c943545d673e10d87c96ca431d3f8c558e853b07ed837635cc633bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 12:45:39 compute-0 ceph-mon[74966]: pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:39 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:45:39 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:45:39 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:45:39 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:45:39 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:45:39 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:45:39 compute-0 podman[155274]: 2025-11-26 12:45:39.214609651 +0000 UTC m=+0.092768214 container start 14a644ea5c943545d673e10d87c96ca431d3f8c558e853b07ed837635cc633bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 12:45:39 compute-0 podman[155274]: 2025-11-26 12:45:39.215635343 +0000 UTC m=+0.093793896 container attach 14a644ea5c943545d673e10d87c96ca431d3f8c558e853b07ed837635cc633bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:45:39 compute-0 beautiful_chaplygin[155316]: 167 167
Nov 26 12:45:39 compute-0 systemd[1]: libpod-14a644ea5c943545d673e10d87c96ca431d3f8c558e853b07ed837635cc633bd.scope: Deactivated successfully.
Nov 26 12:45:39 compute-0 conmon[155316]: conmon 14a644ea5c943545d673 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-14a644ea5c943545d673e10d87c96ca431d3f8c558e853b07ed837635cc633bd.scope/container/memory.events
Nov 26 12:45:39 compute-0 podman[155274]: 2025-11-26 12:45:39.219587211 +0000 UTC m=+0.097745764 container died 14a644ea5c943545d673e10d87c96ca431d3f8c558e853b07ed837635cc633bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:45:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-e59acd8c946a00c7559c77f7fb4e98986f5b8899d2aeab0bc90b3f771dd3995a-merged.mount: Deactivated successfully.
Nov 26 12:45:39 compute-0 podman[155274]: 2025-11-26 12:45:39.140098957 +0000 UTC m=+0.018257530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:45:39 compute-0 podman[155274]: 2025-11-26 12:45:39.244228189 +0000 UTC m=+0.122386742 container remove 14a644ea5c943545d673e10d87c96ca431d3f8c558e853b07ed837635cc633bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:45:39 compute-0 systemd[1]: libpod-conmon-14a644ea5c943545d673e10d87c96ca431d3f8c558e853b07ed837635cc633bd.scope: Deactivated successfully.
Nov 26 12:45:39 compute-0 sudo[155409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxylgokcpkicyjvqtdilmopqmrzsalax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161139.1167393-303-255618730664054/AnsiballZ_stat.py'
Nov 26 12:45:39 compute-0 sudo[155409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:39 compute-0 podman[155412]: 2025-11-26 12:45:39.368925624 +0000 UTC m=+0.031498903 container create b868a5ddaf569f591cd3c793acbb9ada4ac4bccac591adf9850f44fc437351c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclaren, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 12:45:39 compute-0 systemd[1]: Started libpod-conmon-b868a5ddaf569f591cd3c793acbb9ada4ac4bccac591adf9850f44fc437351c3.scope.
Nov 26 12:45:39 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/430cf909bbfb6cea5aec98e401c585242c07c923cf5ada4bc2d98d7a7d61d6db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/430cf909bbfb6cea5aec98e401c585242c07c923cf5ada4bc2d98d7a7d61d6db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/430cf909bbfb6cea5aec98e401c585242c07c923cf5ada4bc2d98d7a7d61d6db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/430cf909bbfb6cea5aec98e401c585242c07c923cf5ada4bc2d98d7a7d61d6db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/430cf909bbfb6cea5aec98e401c585242c07c923cf5ada4bc2d98d7a7d61d6db/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:45:39 compute-0 podman[155412]: 2025-11-26 12:45:39.427926056 +0000 UTC m=+0.090499336 container init b868a5ddaf569f591cd3c793acbb9ada4ac4bccac591adf9850f44fc437351c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:45:39 compute-0 podman[155412]: 2025-11-26 12:45:39.435982951 +0000 UTC m=+0.098556230 container start b868a5ddaf569f591cd3c793acbb9ada4ac4bccac591adf9850f44fc437351c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclaren, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:45:39 compute-0 podman[155412]: 2025-11-26 12:45:39.437861392 +0000 UTC m=+0.100434670 container attach b868a5ddaf569f591cd3c793acbb9ada4ac4bccac591adf9850f44fc437351c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclaren, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 12:45:39 compute-0 podman[155412]: 2025-11-26 12:45:39.356323334 +0000 UTC m=+0.018896613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:45:39 compute-0 python3.9[155415]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:45:39 compute-0 sudo[155409]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:39 compute-0 sudo[155507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypyiilzcdzsfehzoqnsuratsqheqhqod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161139.1167393-303-255618730664054/AnsiballZ_file.py'
Nov 26 12:45:39 compute-0 sudo[155507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:39 compute-0 python3.9[155509]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:45:39 compute-0 sudo[155507]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:40 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:40 compute-0 sudo[155673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylctzjvwtniowczegarrhzpradbzisxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161139.95006-315-57104152064949/AnsiballZ_systemd.py'
Nov 26 12:45:40 compute-0 sudo[155673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:40 compute-0 modest_mclaren[155427]: --> passed data devices: 0 physical, 3 LVM
Nov 26 12:45:40 compute-0 modest_mclaren[155427]: --> relative data size: 1.0
Nov 26 12:45:40 compute-0 modest_mclaren[155427]: --> All data devices are unavailable
Nov 26 12:45:40 compute-0 systemd[1]: libpod-b868a5ddaf569f591cd3c793acbb9ada4ac4bccac591adf9850f44fc437351c3.scope: Deactivated successfully.
Nov 26 12:45:40 compute-0 podman[155412]: 2025-11-26 12:45:40.263843899 +0000 UTC m=+0.926417178 container died b868a5ddaf569f591cd3c793acbb9ada4ac4bccac591adf9850f44fc437351c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclaren, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:45:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-430cf909bbfb6cea5aec98e401c585242c07c923cf5ada4bc2d98d7a7d61d6db-merged.mount: Deactivated successfully.
Nov 26 12:45:40 compute-0 podman[155412]: 2025-11-26 12:45:40.298105638 +0000 UTC m=+0.960678917 container remove b868a5ddaf569f591cd3c793acbb9ada4ac4bccac591adf9850f44fc437351c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclaren, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 12:45:40 compute-0 systemd[1]: libpod-conmon-b868a5ddaf569f591cd3c793acbb9ada4ac4bccac591adf9850f44fc437351c3.scope: Deactivated successfully.
Nov 26 12:45:40 compute-0 sudo[155173]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:40 compute-0 sudo[155695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:45:40 compute-0 sudo[155695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:45:40 compute-0 sudo[155695]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:40 compute-0 python3.9[155675]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:45:40 compute-0 systemd[1]: Reloading.
Nov 26 12:45:40 compute-0 ovn_controller[149335]: 2025-11-26T12:45:40Z|00025|memory|INFO|16128 kB peak resident set size after 29.7 seconds
Nov 26 12:45:40 compute-0 ovn_controller[149335]: 2025-11-26T12:45:40Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Nov 26 12:45:40 compute-0 podman[155719]: 2025-11-26 12:45:40.487223892 +0000 UTC m=+0.098817092 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 26 12:45:40 compute-0 systemd-sysv-generator[155791]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:45:40 compute-0 systemd-rc-local-generator[155788]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:45:40 compute-0 sudo[155727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:45:40 compute-0 sudo[155727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:45:40 compute-0 sudo[155727]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:40 compute-0 systemd[1]: Starting Create netns directory...
Nov 26 12:45:40 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 26 12:45:40 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 26 12:45:40 compute-0 systemd[1]: Finished Create netns directory.
Nov 26 12:45:40 compute-0 sudo[155806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:45:40 compute-0 sudo[155806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:45:40 compute-0 sudo[155806]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:40 compute-0 sudo[155673]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:40 compute-0 sudo[155835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- lvm list --format json
Nov 26 12:45:40 compute-0 sudo[155835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:45:40 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:45:41 compute-0 podman[155967]: 2025-11-26 12:45:41.000570236 +0000 UTC m=+0.030150382 container create c2f2bcd8bf0eb4a86174efe675fd3c29f3a9f743f420b515617b81e2c3f61455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_leakey, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 12:45:41 compute-0 systemd[1]: Started libpod-conmon-c2f2bcd8bf0eb4a86174efe675fd3c29f3a9f743f420b515617b81e2c3f61455.scope.
Nov 26 12:45:41 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:45:41 compute-0 podman[155967]: 2025-11-26 12:45:41.054157957 +0000 UTC m=+0.083738104 container init c2f2bcd8bf0eb4a86174efe675fd3c29f3a9f743f420b515617b81e2c3f61455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_leakey, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 12:45:41 compute-0 podman[155967]: 2025-11-26 12:45:41.058976378 +0000 UTC m=+0.088556525 container start c2f2bcd8bf0eb4a86174efe675fd3c29f3a9f743f420b515617b81e2c3f61455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:45:41 compute-0 podman[155967]: 2025-11-26 12:45:41.062402424 +0000 UTC m=+0.091982581 container attach c2f2bcd8bf0eb4a86174efe675fd3c29f3a9f743f420b515617b81e2c3f61455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_leakey, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 12:45:41 compute-0 heuristic_leakey[156007]: 167 167
Nov 26 12:45:41 compute-0 podman[155967]: 2025-11-26 12:45:41.063090712 +0000 UTC m=+0.092670848 container died c2f2bcd8bf0eb4a86174efe675fd3c29f3a9f743f420b515617b81e2c3f61455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_leakey, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:45:41 compute-0 systemd[1]: libpod-c2f2bcd8bf0eb4a86174efe675fd3c29f3a9f743f420b515617b81e2c3f61455.scope: Deactivated successfully.
Nov 26 12:45:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4700f2ad6f949246b3e687a2f29172c42f1f58d78d1a289a60cbd13a59590c0-merged.mount: Deactivated successfully.
Nov 26 12:45:41 compute-0 podman[155967]: 2025-11-26 12:45:41.084099064 +0000 UTC m=+0.113679212 container remove c2f2bcd8bf0eb4a86174efe675fd3c29f3a9f743f420b515617b81e2c3f61455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_leakey, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:45:41 compute-0 podman[155967]: 2025-11-26 12:45:40.989148069 +0000 UTC m=+0.018728236 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:45:41 compute-0 systemd[1]: libpod-conmon-c2f2bcd8bf0eb4a86174efe675fd3c29f3a9f743f420b515617b81e2c3f61455.scope: Deactivated successfully.
Nov 26 12:45:41 compute-0 sudo[156069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpwzekolofmjxwubyfbduuozmcttyczs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161140.9082215-325-267507558383877/AnsiballZ_file.py'
Nov 26 12:45:41 compute-0 sudo[156069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:41 compute-0 podman[156077]: 2025-11-26 12:45:41.209105332 +0000 UTC m=+0.029120223 container create 80c223153adf5cd2b0d252881069c998efc3ea61f663da386d2c3c8ebaad0031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_albattani, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 12:45:41 compute-0 ceph-mon[74966]: pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:41 compute-0 systemd[1]: Started libpod-conmon-80c223153adf5cd2b0d252881069c998efc3ea61f663da386d2c3c8ebaad0031.scope.
Nov 26 12:45:41 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:45:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b2b1b476f53562a8b3f452d94c26e7805b9ff9d714e45ef3747fe638e1c955f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:45:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b2b1b476f53562a8b3f452d94c26e7805b9ff9d714e45ef3747fe638e1c955f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:45:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b2b1b476f53562a8b3f452d94c26e7805b9ff9d714e45ef3747fe638e1c955f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:45:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b2b1b476f53562a8b3f452d94c26e7805b9ff9d714e45ef3747fe638e1c955f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:45:41 compute-0 python3.9[156071]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:45:41 compute-0 podman[156077]: 2025-11-26 12:45:41.276083478 +0000 UTC m=+0.096098359 container init 80c223153adf5cd2b0d252881069c998efc3ea61f663da386d2c3c8ebaad0031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_albattani, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 12:45:41 compute-0 podman[156077]: 2025-11-26 12:45:41.281750568 +0000 UTC m=+0.101765450 container start 80c223153adf5cd2b0d252881069c998efc3ea61f663da386d2c3c8ebaad0031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 12:45:41 compute-0 podman[156077]: 2025-11-26 12:45:41.283225488 +0000 UTC m=+0.103240369 container attach 80c223153adf5cd2b0d252881069c998efc3ea61f663da386d2c3c8ebaad0031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_albattani, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 12:45:41 compute-0 podman[156077]: 2025-11-26 12:45:41.196943812 +0000 UTC m=+0.016958713 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:45:41 compute-0 sudo[156069]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:41 compute-0 sudo[156244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyyeljfmfspeeyvaemoinxhpypgspkae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161141.4068146-333-180842987639987/AnsiballZ_stat.py'
Nov 26 12:45:41 compute-0 sudo[156244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:41 compute-0 python3.9[156246]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:45:41 compute-0 sudo[156244]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:41 compute-0 tender_albattani[156090]: {
Nov 26 12:45:41 compute-0 tender_albattani[156090]:     "0": [
Nov 26 12:45:41 compute-0 tender_albattani[156090]:         {
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "devices": [
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "/dev/loop3"
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             ],
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "lv_name": "ceph_lv0",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "lv_size": "21470642176",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "name": "ceph_lv0",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "tags": {
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.cluster_name": "ceph",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.crush_device_class": "",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.encrypted": "0",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.osd_id": "0",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.type": "block",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.vdo": "0"
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             },
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "type": "block",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "vg_name": "ceph_vg0"
Nov 26 12:45:41 compute-0 tender_albattani[156090]:         }
Nov 26 12:45:41 compute-0 tender_albattani[156090]:     ],
Nov 26 12:45:41 compute-0 tender_albattani[156090]:     "1": [
Nov 26 12:45:41 compute-0 tender_albattani[156090]:         {
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "devices": [
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "/dev/loop4"
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             ],
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "lv_name": "ceph_lv1",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "lv_size": "21470642176",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "name": "ceph_lv1",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "tags": {
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.cluster_name": "ceph",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.crush_device_class": "",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.encrypted": "0",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.osd_id": "1",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.type": "block",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.vdo": "0"
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             },
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "type": "block",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "vg_name": "ceph_vg1"
Nov 26 12:45:41 compute-0 tender_albattani[156090]:         }
Nov 26 12:45:41 compute-0 tender_albattani[156090]:     ],
Nov 26 12:45:41 compute-0 tender_albattani[156090]:     "2": [
Nov 26 12:45:41 compute-0 tender_albattani[156090]:         {
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "devices": [
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "/dev/loop5"
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             ],
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "lv_name": "ceph_lv2",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "lv_size": "21470642176",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "name": "ceph_lv2",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "tags": {
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.cluster_name": "ceph",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.crush_device_class": "",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.encrypted": "0",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.osd_id": "2",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.type": "block",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:                 "ceph.vdo": "0"
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             },
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "type": "block",
Nov 26 12:45:41 compute-0 tender_albattani[156090]:             "vg_name": "ceph_vg2"
Nov 26 12:45:41 compute-0 tender_albattani[156090]:         }
Nov 26 12:45:41 compute-0 tender_albattani[156090]:     ]
Nov 26 12:45:41 compute-0 tender_albattani[156090]: }
Nov 26 12:45:41 compute-0 systemd[1]: libpod-80c223153adf5cd2b0d252881069c998efc3ea61f663da386d2c3c8ebaad0031.scope: Deactivated successfully.
Nov 26 12:45:41 compute-0 podman[156077]: 2025-11-26 12:45:41.920879678 +0000 UTC m=+0.740894549 container died 80c223153adf5cd2b0d252881069c998efc3ea61f663da386d2c3c8ebaad0031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_albattani, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:45:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b2b1b476f53562a8b3f452d94c26e7805b9ff9d714e45ef3747fe638e1c955f-merged.mount: Deactivated successfully.
Nov 26 12:45:41 compute-0 podman[156077]: 2025-11-26 12:45:41.955313105 +0000 UTC m=+0.775327987 container remove 80c223153adf5cd2b0d252881069c998efc3ea61f663da386d2c3c8ebaad0031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_albattani, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 12:45:41 compute-0 systemd[1]: libpod-conmon-80c223153adf5cd2b0d252881069c998efc3ea61f663da386d2c3c8ebaad0031.scope: Deactivated successfully.
Nov 26 12:45:41 compute-0 sudo[155835]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:42 compute-0 sudo[156383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgakihdpvcjdrowxpeatsenvaeogbfdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161141.4068146-333-180842987639987/AnsiballZ_copy.py'
Nov 26 12:45:42 compute-0 sudo[156383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:42 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:42 compute-0 sudo[156382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:45:42 compute-0 sudo[156382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:45:42 compute-0 sudo[156382]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:42 compute-0 sudo[156410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:45:42 compute-0 sudo[156410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:45:42 compute-0 sudo[156410]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:42 compute-0 sudo[156435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:45:42 compute-0 sudo[156435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:45:42 compute-0 sudo[156435]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:42 compute-0 sudo[156460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- raw list --format json
Nov 26 12:45:42 compute-0 sudo[156460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:45:42 compute-0 python3.9[156400]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161141.4068146-333-180842987639987/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:45:42 compute-0 sudo[156383]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:42 compute-0 podman[156540]: 2025-11-26 12:45:42.391489806 +0000 UTC m=+0.027686152 container create 03d68ddb6bc9274b5e60a42b0403c694af4abca5bd92c140b05b9845defa572c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mayer, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:45:42 compute-0 systemd[1]: Started libpod-conmon-03d68ddb6bc9274b5e60a42b0403c694af4abca5bd92c140b05b9845defa572c.scope.
Nov 26 12:45:42 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:45:42 compute-0 podman[156540]: 2025-11-26 12:45:42.446040769 +0000 UTC m=+0.082237115 container init 03d68ddb6bc9274b5e60a42b0403c694af4abca5bd92c140b05b9845defa572c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mayer, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:45:42 compute-0 podman[156540]: 2025-11-26 12:45:42.450589372 +0000 UTC m=+0.086785718 container start 03d68ddb6bc9274b5e60a42b0403c694af4abca5bd92c140b05b9845defa572c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mayer, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:45:42 compute-0 podman[156540]: 2025-11-26 12:45:42.45167072 +0000 UTC m=+0.087867066 container attach 03d68ddb6bc9274b5e60a42b0403c694af4abca5bd92c140b05b9845defa572c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:45:42 compute-0 pensive_mayer[156553]: 167 167
Nov 26 12:45:42 compute-0 systemd[1]: libpod-03d68ddb6bc9274b5e60a42b0403c694af4abca5bd92c140b05b9845defa572c.scope: Deactivated successfully.
Nov 26 12:45:42 compute-0 conmon[156553]: conmon 03d68ddb6bc9274b5e60 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-03d68ddb6bc9274b5e60a42b0403c694af4abca5bd92c140b05b9845defa572c.scope/container/memory.events
Nov 26 12:45:42 compute-0 podman[156540]: 2025-11-26 12:45:42.455569258 +0000 UTC m=+0.091765604 container died 03d68ddb6bc9274b5e60a42b0403c694af4abca5bd92c140b05b9845defa572c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:45:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-e273a87e39c1d96a9fb56585217f31f40c70045bd455fd408c4cf00223ed0dba-merged.mount: Deactivated successfully.
Nov 26 12:45:42 compute-0 podman[156540]: 2025-11-26 12:45:42.471552893 +0000 UTC m=+0.107749238 container remove 03d68ddb6bc9274b5e60a42b0403c694af4abca5bd92c140b05b9845defa572c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mayer, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:45:42 compute-0 podman[156540]: 2025-11-26 12:45:42.379884444 +0000 UTC m=+0.016080810 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:45:42 compute-0 systemd[1]: libpod-conmon-03d68ddb6bc9274b5e60a42b0403c694af4abca5bd92c140b05b9845defa572c.scope: Deactivated successfully.
Nov 26 12:45:42 compute-0 podman[156598]: 2025-11-26 12:45:42.588966694 +0000 UTC m=+0.027916826 container create f0f078753f48766653bcf46e7953004a22c0d1d52cd9fd227bbfdaf1a2f0c8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 26 12:45:42 compute-0 systemd[1]: Started libpod-conmon-f0f078753f48766653bcf46e7953004a22c0d1d52cd9fd227bbfdaf1a2f0c8e8.scope.
Nov 26 12:45:42 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96761291be7dfdea5367dc202a0a91abf0b4e1de4a8bb4e140a0d8689ce86fa9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96761291be7dfdea5367dc202a0a91abf0b4e1de4a8bb4e140a0d8689ce86fa9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96761291be7dfdea5367dc202a0a91abf0b4e1de4a8bb4e140a0d8689ce86fa9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96761291be7dfdea5367dc202a0a91abf0b4e1de4a8bb4e140a0d8689ce86fa9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:45:42 compute-0 podman[156598]: 2025-11-26 12:45:42.642511912 +0000 UTC m=+0.081462064 container init f0f078753f48766653bcf46e7953004a22c0d1d52cd9fd227bbfdaf1a2f0c8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lalande, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:45:42 compute-0 podman[156598]: 2025-11-26 12:45:42.64842666 +0000 UTC m=+0.087376792 container start f0f078753f48766653bcf46e7953004a22c0d1d52cd9fd227bbfdaf1a2f0c8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 12:45:42 compute-0 podman[156598]: 2025-11-26 12:45:42.649435121 +0000 UTC m=+0.088385253 container attach f0f078753f48766653bcf46e7953004a22c0d1d52cd9fd227bbfdaf1a2f0c8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 12:45:42 compute-0 podman[156598]: 2025-11-26 12:45:42.57809274 +0000 UTC m=+0.017042892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:45:42 compute-0 sudo[156719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txffbvqqrcfgwzajterrmrwtiibwldiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161142.54582-350-87477683450286/AnsiballZ_file.py'
Nov 26 12:45:42 compute-0 sudo[156719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:42 compute-0 python3.9[156721]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:45:42 compute-0 sudo[156719]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:43 compute-0 sudo[156872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruohifwfbivbexvbpdlmwwwqyoygpeso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161143.0413847-358-71446052308829/AnsiballZ_stat.py'
Nov 26 12:45:43 compute-0 sudo[156872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:43 compute-0 ceph-mon[74966]: pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:43 compute-0 python3.9[156875]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:45:43 compute-0 sudo[156872]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:43 compute-0 heuristic_lalande[156641]: {
Nov 26 12:45:43 compute-0 heuristic_lalande[156641]:     "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 12:45:43 compute-0 heuristic_lalande[156641]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:45:43 compute-0 heuristic_lalande[156641]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 12:45:43 compute-0 heuristic_lalande[156641]:         "osd_id": 1,
Nov 26 12:45:43 compute-0 heuristic_lalande[156641]:         "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:45:43 compute-0 heuristic_lalande[156641]:         "type": "bluestore"
Nov 26 12:45:43 compute-0 heuristic_lalande[156641]:     },
Nov 26 12:45:43 compute-0 heuristic_lalande[156641]:     "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 12:45:43 compute-0 heuristic_lalande[156641]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:45:43 compute-0 heuristic_lalande[156641]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 12:45:43 compute-0 heuristic_lalande[156641]:         "osd_id": 2,
Nov 26 12:45:43 compute-0 heuristic_lalande[156641]:         "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:45:43 compute-0 heuristic_lalande[156641]:         "type": "bluestore"
Nov 26 12:45:43 compute-0 heuristic_lalande[156641]:     },
Nov 26 12:45:43 compute-0 heuristic_lalande[156641]:     "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 12:45:43 compute-0 heuristic_lalande[156641]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:45:43 compute-0 heuristic_lalande[156641]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 12:45:43 compute-0 heuristic_lalande[156641]:         "osd_id": 0,
Nov 26 12:45:43 compute-0 heuristic_lalande[156641]:         "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:45:43 compute-0 heuristic_lalande[156641]:         "type": "bluestore"
Nov 26 12:45:43 compute-0 heuristic_lalande[156641]:     }
Nov 26 12:45:43 compute-0 heuristic_lalande[156641]: }
Nov 26 12:45:43 compute-0 systemd[1]: libpod-f0f078753f48766653bcf46e7953004a22c0d1d52cd9fd227bbfdaf1a2f0c8e8.scope: Deactivated successfully.
Nov 26 12:45:43 compute-0 conmon[156641]: conmon f0f078753f48766653bc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f0f078753f48766653bcf46e7953004a22c0d1d52cd9fd227bbfdaf1a2f0c8e8.scope/container/memory.events
Nov 26 12:45:43 compute-0 podman[156598]: 2025-11-26 12:45:43.433460067 +0000 UTC m=+0.872410269 container died f0f078753f48766653bcf46e7953004a22c0d1d52cd9fd227bbfdaf1a2f0c8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 26 12:45:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-96761291be7dfdea5367dc202a0a91abf0b4e1de4a8bb4e140a0d8689ce86fa9-merged.mount: Deactivated successfully.
Nov 26 12:45:43 compute-0 podman[156598]: 2025-11-26 12:45:43.466509695 +0000 UTC m=+0.905459828 container remove f0f078753f48766653bcf46e7953004a22c0d1d52cd9fd227bbfdaf1a2f0c8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 12:45:43 compute-0 systemd[1]: libpod-conmon-f0f078753f48766653bcf46e7953004a22c0d1d52cd9fd227bbfdaf1a2f0c8e8.scope: Deactivated successfully.
Nov 26 12:45:43 compute-0 sudo[156460]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:43 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:45:43 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:45:43 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:45:43 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:45:43 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev c5023bd2-292f-43da-93b6-00d49925100e does not exist
Nov 26 12:45:43 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev ba7b5c30-9bb6-49b0-98f7-f47a3705e021 does not exist
Nov 26 12:45:43 compute-0 sudo[156972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:45:43 compute-0 sudo[156972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:45:43 compute-0 sudo[156972]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:43 compute-0 sudo[157015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 12:45:43 compute-0 sudo[157015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:45:43 compute-0 sudo[157015]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:43 compute-0 sudo[157083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyygswrbchyroqqylawfxrjfptciebqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161143.0413847-358-71446052308829/AnsiballZ_copy.py'
Nov 26 12:45:43 compute-0 sudo[157083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:43 compute-0 python3.9[157085]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764161143.0413847-358-71446052308829/.source.json _original_basename=.e4q8diq7 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:45:43 compute-0 sudo[157083]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:44 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:44 compute-0 sudo[157235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luwqmfrppnkxyevkelyonvawwxssjgzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161143.9171288-373-73234072008267/AnsiballZ_file.py'
Nov 26 12:45:44 compute-0 sudo[157235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:44 compute-0 python3.9[157237]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:45:44 compute-0 sudo[157235]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:44 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:45:44 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:45:44 compute-0 ceph-mon[74966]: pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:44 compute-0 sudo[157387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wejtmedfakaatphudlnnwzszkcfbkkbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161144.4196155-381-173455160078487/AnsiballZ_stat.py'
Nov 26 12:45:44 compute-0 sudo[157387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:44 compute-0 sudo[157387]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:44 compute-0 sudo[157510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltflxxkjgaswhfgfosaszgewpqettsxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161144.4196155-381-173455160078487/AnsiballZ_copy.py'
Nov 26 12:45:44 compute-0 sudo[157510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 12:45:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:45:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 12:45:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:45:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:45:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:45:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:45:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:45:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:45:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:45:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:45:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:45:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 12:45:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:45:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:45:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:45:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 12:45:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:45:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 12:45:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:45:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:45:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:45:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 12:45:45 compute-0 sudo[157510]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:45 compute-0 sudo[157662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewpnvkloozapqggsyfsvmowusmdhgiji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161145.3622923-398-200255009390521/AnsiballZ_container_config_data.py'
Nov 26 12:45:45 compute-0 sudo[157662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:45 compute-0 python3.9[157664]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 26 12:45:45 compute-0 sudo[157662]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:45 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:45:46 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:46 compute-0 sudo[157814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clkibtzeehdygwghqxaapyfjntbcjody ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161146.1093614-407-176079914190553/AnsiballZ_container_config_hash.py'
Nov 26 12:45:46 compute-0 sudo[157814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:46 compute-0 python3.9[157816]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 12:45:46 compute-0 sudo[157814]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:47 compute-0 ceph-mon[74966]: pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:47 compute-0 sudo[157966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmnmwpzjlxgxryzxhbvipqyxbwgczpfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161146.7756853-416-167960799130932/AnsiballZ_podman_container_info.py'
Nov 26 12:45:47 compute-0 sudo[157966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:47 compute-0 python3.9[157968]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 26 12:45:47 compute-0 sudo[157966]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:48 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:48 compute-0 sudo[158136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avjgtnpahjyyrmpqewyvwopxywkpwebp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764161147.8409145-429-17275928594156/AnsiballZ_edpm_container_manage.py'
Nov 26 12:45:48 compute-0 sudo[158136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:48 compute-0 python3[158138]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 12:45:49 compute-0 ceph-mon[74966]: pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:50 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:45:51 compute-0 ceph-mon[74966]: pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:52 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:53 compute-0 ceph-mon[74966]: pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:54 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:55 compute-0 ceph-mon[74966]: pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:55 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:45:56 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:56 compute-0 podman[158149]: 2025-11-26 12:45:56.537385797 +0000 UTC m=+8.105951629 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 26 12:45:56 compute-0 podman[158248]: 2025-11-26 12:45:56.638647477 +0000 UTC m=+0.029763568 container create 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 26 12:45:56 compute-0 podman[158248]: 2025-11-26 12:45:56.624554044 +0000 UTC m=+0.015670145 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 26 12:45:56 compute-0 python3[158138]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 26 12:45:56 compute-0 sudo[158136]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:57 compute-0 sudo[158424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uclporvqblkuijrvyhkcrhnimujzevgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161156.8630831-437-191095134506006/AnsiballZ_stat.py'
Nov 26 12:45:57 compute-0 sudo[158424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:57 compute-0 ceph-mon[74966]: pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:57 compute-0 python3.9[158426]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:45:57 compute-0 sudo[158424]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:57 compute-0 sudo[158578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcswcqkslcueuxhhlphvgqpzinbuzjbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161157.3788702-446-189045535751745/AnsiballZ_file.py'
Nov 26 12:45:57 compute-0 sudo[158578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:57 compute-0 python3.9[158580]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:45:57 compute-0 sudo[158578]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:57 compute-0 sudo[158654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orhmtswpzczqglceuhfbrwrpfrzlhchm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161157.3788702-446-189045535751745/AnsiballZ_stat.py'
Nov 26 12:45:57 compute-0 sudo[158654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:58 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:58 compute-0 python3.9[158656]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:45:58 compute-0 sudo[158654]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:58 compute-0 sudo[158805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpqvaxbxphmsnkmmmnbohmzgimlqrsqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161158.2123039-446-272438720783303/AnsiballZ_copy.py'
Nov 26 12:45:58 compute-0 sudo[158805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:58 compute-0 python3.9[158807]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764161158.2123039-446-272438720783303/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:45:58 compute-0 sudo[158805]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:58 compute-0 sudo[158881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tduvbprmwiynxtiokaqmvwzxbpbykdmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161158.2123039-446-272438720783303/AnsiballZ_systemd.py'
Nov 26 12:45:58 compute-0 sudo[158881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:59 compute-0 python3.9[158883]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 12:45:59 compute-0 systemd[1]: Reloading.
Nov 26 12:45:59 compute-0 ceph-mon[74966]: pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:45:59 compute-0 systemd-sysv-generator[158907]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:45:59 compute-0 systemd-rc-local-generator[158903]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:45:59 compute-0 sudo[158881]: pam_unix(sudo:session): session closed for user root
Nov 26 12:45:59 compute-0 sudo[158992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfwvfyfffkzxejxlnvkylhgqkmpdcrvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161158.2123039-446-272438720783303/AnsiballZ_systemd.py'
Nov 26 12:45:59 compute-0 sudo[158992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:45:59 compute-0 python3.9[158994]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:45:59 compute-0 systemd[1]: Reloading.
Nov 26 12:45:59 compute-0 systemd-rc-local-generator[159019]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:45:59 compute-0 systemd-sysv-generator[159022]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:45:59 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Nov 26 12:46:00 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:00 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:46:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56a095883700a37a5a884e5aec0798798d800e840306049128f2d208c384baa0/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 26 12:46:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56a095883700a37a5a884e5aec0798798d800e840306049128f2d208c384baa0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 26 12:46:00 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff.
Nov 26 12:46:00 compute-0 podman[159035]: 2025-11-26 12:46:00.09441738 +0000 UTC m=+0.082748179 container init 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 26 12:46:00 compute-0 ovn_metadata_agent[159048]: + sudo -E kolla_set_configs
Nov 26 12:46:00 compute-0 podman[159035]: 2025-11-26 12:46:00.112548832 +0000 UTC m=+0.100879610 container start 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 26 12:46:00 compute-0 edpm-start-podman-container[159035]: ovn_metadata_agent
Nov 26 12:46:00 compute-0 ovn_metadata_agent[159048]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 12:46:00 compute-0 ovn_metadata_agent[159048]: INFO:__main__:Validating config file
Nov 26 12:46:00 compute-0 ovn_metadata_agent[159048]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 12:46:00 compute-0 ovn_metadata_agent[159048]: INFO:__main__:Copying service configuration files
Nov 26 12:46:00 compute-0 ovn_metadata_agent[159048]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 26 12:46:00 compute-0 ovn_metadata_agent[159048]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 26 12:46:00 compute-0 ovn_metadata_agent[159048]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 26 12:46:00 compute-0 ovn_metadata_agent[159048]: INFO:__main__:Writing out command to execute
Nov 26 12:46:00 compute-0 ovn_metadata_agent[159048]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 26 12:46:00 compute-0 ovn_metadata_agent[159048]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 26 12:46:00 compute-0 ovn_metadata_agent[159048]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 26 12:46:00 compute-0 ovn_metadata_agent[159048]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 26 12:46:00 compute-0 ovn_metadata_agent[159048]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 26 12:46:00 compute-0 ovn_metadata_agent[159048]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 26 12:46:00 compute-0 ovn_metadata_agent[159048]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 26 12:46:00 compute-0 podman[159055]: 2025-11-26 12:46:00.160348752 +0000 UTC m=+0.041162601 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 26 12:46:00 compute-0 ovn_metadata_agent[159048]: ++ cat /run_command
Nov 26 12:46:00 compute-0 ovn_metadata_agent[159048]: + CMD=neutron-ovn-metadata-agent
Nov 26 12:46:00 compute-0 ovn_metadata_agent[159048]: + ARGS=
Nov 26 12:46:00 compute-0 edpm-start-podman-container[159034]: Creating additional drop-in dependency for "ovn_metadata_agent" (5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff)
Nov 26 12:46:00 compute-0 ovn_metadata_agent[159048]: + sudo kolla_copy_cacerts
Nov 26 12:46:00 compute-0 systemd[1]: Reloading.
Nov 26 12:46:00 compute-0 ovn_metadata_agent[159048]: + [[ ! -n '' ]]
Nov 26 12:46:00 compute-0 ovn_metadata_agent[159048]: + . kolla_extend_start
Nov 26 12:46:00 compute-0 ovn_metadata_agent[159048]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 26 12:46:00 compute-0 ovn_metadata_agent[159048]: + umask 0022
Nov 26 12:46:00 compute-0 ovn_metadata_agent[159048]: + exec neutron-ovn-metadata-agent
Nov 26 12:46:00 compute-0 ovn_metadata_agent[159048]: Running command: 'neutron-ovn-metadata-agent'
Nov 26 12:46:00 compute-0 systemd-sysv-generator[159115]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:46:00 compute-0 systemd-rc-local-generator[159112]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:46:00 compute-0 systemd[1]: Started ovn_metadata_agent container.
Nov 26 12:46:00 compute-0 sudo[158992]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:00 compute-0 sshd-session[149936]: Connection closed by 192.168.122.30 port 33762
Nov 26 12:46:00 compute-0 sshd-session[149933]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:46:00 compute-0 systemd-logind[777]: Session 47 logged out. Waiting for processes to exit.
Nov 26 12:46:00 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Nov 26 12:46:00 compute-0 systemd[1]: session-47.scope: Consumed 40.972s CPU time.
Nov 26 12:46:00 compute-0 systemd-logind[777]: Removed session 47.
Nov 26 12:46:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:46:01 compute-0 ceph-mon[74966]: pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.679 159053 INFO neutron.common.config [-] Logging enabled!
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.679 159053 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.679 159053 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.680 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.680 159053 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.680 159053 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.680 159053 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.680 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.680 159053 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.680 159053 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.681 159053 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.681 159053 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.681 159053 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.681 159053 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.681 159053 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.681 159053 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.681 159053 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.681 159053 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.681 159053 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.681 159053 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.682 159053 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.682 159053 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.682 159053 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.682 159053 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.682 159053 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.682 159053 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.682 159053 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.682 159053 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.682 159053 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.682 159053 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.683 159053 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.683 159053 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.683 159053 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.683 159053 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.683 159053 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.683 159053 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.683 159053 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.683 159053 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.683 159053 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.684 159053 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.684 159053 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.684 159053 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.684 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.684 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.684 159053 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.684 159053 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.684 159053 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.684 159053 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.684 159053 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.685 159053 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.685 159053 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.685 159053 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.685 159053 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.685 159053 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.685 159053 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.685 159053 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.685 159053 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.685 159053 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.686 159053 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.686 159053 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.686 159053 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.686 159053 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.686 159053 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.686 159053 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.686 159053 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.686 159053 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.686 159053 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.687 159053 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.687 159053 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.687 159053 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.687 159053 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.687 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.687 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.687 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.687 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.687 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.687 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.688 159053 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.688 159053 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.688 159053 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.688 159053 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.688 159053 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.688 159053 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.688 159053 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.688 159053 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.688 159053 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.689 159053 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.689 159053 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.689 159053 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.689 159053 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.689 159053 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.689 159053 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.689 159053 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.689 159053 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.689 159053 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.689 159053 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.690 159053 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.690 159053 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.690 159053 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.690 159053 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.690 159053 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.690 159053 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.690 159053 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.690 159053 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.690 159053 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.690 159053 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.690 159053 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.691 159053 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.691 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.691 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.691 159053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.691 159053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.691 159053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.691 159053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.691 159053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.691 159053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.692 159053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.692 159053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.692 159053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.692 159053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.692 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.692 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.692 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.692 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.692 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.693 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.693 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.693 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.693 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.693 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.693 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.693 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.693 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.693 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.694 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.694 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.694 159053 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.694 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.694 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.694 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.694 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.694 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.694 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.695 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.695 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.695 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.695 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.695 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.695 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.695 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.695 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.695 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.695 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.696 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.696 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.696 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.696 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.696 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.696 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.696 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.696 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.696 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.697 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.697 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.697 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.697 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.697 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.697 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.697 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.697 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.697 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.697 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.698 159053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.698 159053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.698 159053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.698 159053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.698 159053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.698 159053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.698 159053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.698 159053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.698 159053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.698 159053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.699 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.699 159053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.699 159053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.699 159053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.699 159053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.699 159053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.699 159053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.699 159053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.699 159053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.700 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.700 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.700 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.700 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.700 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.700 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.700 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.700 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.700 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.700 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.701 159053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.701 159053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.701 159053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.701 159053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.701 159053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.701 159053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.701 159053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.701 159053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.701 159053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.701 159053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.702 159053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.702 159053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.702 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.702 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.702 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.702 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.702 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.702 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.702 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.703 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.703 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.703 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.703 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.703 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.703 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.703 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.703 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.703 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.703 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.704 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.704 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.704 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.704 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.704 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.704 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.704 159053 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.704 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.704 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.704 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.705 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.705 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.705 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.705 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.705 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.705 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.705 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.705 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.705 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.705 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.706 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.706 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.706 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.706 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.706 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.706 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.706 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.706 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.706 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.707 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.707 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.707 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.707 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.707 159053 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.707 159053 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.707 159053 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.707 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.707 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.708 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.708 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.708 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.708 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.708 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.708 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.708 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.708 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.708 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.708 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.709 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.709 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.709 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.709 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.709 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.709 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.709 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.709 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.709 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.709 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.710 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.710 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.710 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.710 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.710 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.710 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.710 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.710 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.710 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.711 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.711 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.711 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.711 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.711 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.711 159053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.711 159053 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.718 159053 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.719 159053 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.719 159053 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.719 159053 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.719 159053 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.729 159053 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 1a132c77-5dda-4b90-923d-26a448f3fef6 (UUID: 1a132c77-5dda-4b90-923d-26a448f3fef6) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.751 159053 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.751 159053 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.751 159053 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.751 159053 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.754 159053 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.759 159053 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.764 159053 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '1a132c77-5dda-4b90-923d-26a448f3fef6'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f12de3dbbe0>], external_ids={}, name=1a132c77-5dda-4b90-923d-26a448f3fef6, nb_cfg_timestamp=1764161118780, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.765 159053 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f12de35ee80>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.765 159053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.765 159053 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.766 159053 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.766 159053 INFO oslo_service.service [-] Starting 1 workers
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.770 159053 DEBUG oslo_service.service [-] Started child 159155 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.773 159053 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp4ovitr4s/privsep.sock']
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.773 159155 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-429504'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.789 159155 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.790 159155 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.790 159155 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.792 159155 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.798 159155 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 26 12:46:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:01.802 159155 INFO eventlet.wsgi.server [-] (159155) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Nov 26 12:46:02 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:02 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 26 12:46:02 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:02.299 159053 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 26 12:46:02 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:02.300 159053 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp4ovitr4s/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 26 12:46:02 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:02.219 159160 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 26 12:46:02 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:02.222 159160 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 26 12:46:02 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:02.224 159160 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 26 12:46:02 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:02.224 159160 INFO oslo.privsep.daemon [-] privsep daemon running as pid 159160
Nov 26 12:46:02 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:02.302 159160 DEBUG oslo.privsep.daemon [-] privsep: reply[6c02a2e4-c82b-4e22-8f1e-b054bc3d796f]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 12:46:02 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:02.702 159160 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 12:46:02 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:02.702 159160 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 12:46:02 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:02.703 159160 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.141 159160 DEBUG oslo.privsep.daemon [-] privsep: reply[ad09efbc-dfbf-4b65-b3f1-b717b74bab22]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.144 159053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=1a132c77-5dda-4b90-923d-26a448f3fef6, column=external_ids, values=({'neutron:ovn-metadata-id': '4fda91b0-bfd4-5361-9fc0-dd5f70601ca4'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.151 159053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1a132c77-5dda-4b90-923d-26a448f3fef6, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.157 159053 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.157 159053 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.157 159053 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.157 159053 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.158 159053 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.158 159053 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.158 159053 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.158 159053 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.158 159053 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.158 159053 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.158 159053 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.158 159053 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.159 159053 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.159 159053 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.159 159053 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.159 159053 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.159 159053 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.159 159053 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.159 159053 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.159 159053 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.159 159053 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.160 159053 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.160 159053 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.160 159053 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.160 159053 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.160 159053 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.160 159053 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.160 159053 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.160 159053 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.160 159053 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.161 159053 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.161 159053 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.161 159053 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.161 159053 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.161 159053 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.161 159053 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.161 159053 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.161 159053 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.162 159053 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.162 159053 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.162 159053 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.162 159053 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.162 159053 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.162 159053 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.162 159053 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.162 159053 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.162 159053 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.163 159053 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.163 159053 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.163 159053 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.163 159053 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.163 159053 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.163 159053 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.163 159053 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.163 159053 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.163 159053 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.163 159053 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.164 159053 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.164 159053 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.164 159053 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.164 159053 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.164 159053 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.164 159053 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.164 159053 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.164 159053 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.164 159053 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.164 159053 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.165 159053 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.165 159053 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.165 159053 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.165 159053 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.165 159053 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.165 159053 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.165 159053 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.165 159053 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.165 159053 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.165 159053 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.166 159053 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.166 159053 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.166 159053 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.166 159053 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.166 159053 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.166 159053 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.166 159053 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.166 159053 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.166 159053 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.167 159053 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.167 159053 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.167 159053 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.167 159053 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.167 159053 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.167 159053 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.167 159053 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.167 159053 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.167 159053 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.168 159053 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.168 159053 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.168 159053 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.168 159053 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.168 159053 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.168 159053 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.168 159053 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.168 159053 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.168 159053 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.169 159053 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.169 159053 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.169 159053 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.169 159053 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.169 159053 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.169 159053 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.169 159053 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.169 159053 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.170 159053 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.170 159053 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.170 159053 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.170 159053 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.170 159053 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.170 159053 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.170 159053 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.170 159053 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.171 159053 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.171 159053 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.171 159053 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.171 159053 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.171 159053 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.171 159053 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.171 159053 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.171 159053 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.172 159053 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.172 159053 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.172 159053 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.172 159053 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.172 159053 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.172 159053 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.172 159053 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.172 159053 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.172 159053 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.172 159053 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.173 159053 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.173 159053 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.173 159053 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.173 159053 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.173 159053 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.173 159053 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.173 159053 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.173 159053 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.173 159053 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.173 159053 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.174 159053 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.174 159053 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.174 159053 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.174 159053 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.174 159053 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.174 159053 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.174 159053 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.174 159053 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.174 159053 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.174 159053 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.174 159053 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.175 159053 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.175 159053 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.175 159053 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.175 159053 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.175 159053 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.175 159053 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.175 159053 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.175 159053 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.175 159053 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.175 159053 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.176 159053 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.176 159053 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.176 159053 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.176 159053 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.176 159053 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.176 159053 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.176 159053 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.176 159053 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.176 159053 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.176 159053 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.177 159053 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.177 159053 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.177 159053 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.177 159053 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.177 159053 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.177 159053 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.177 159053 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.177 159053 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.177 159053 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.177 159053 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.178 159053 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.178 159053 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.178 159053 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.178 159053 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.178 159053 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.178 159053 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.178 159053 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.178 159053 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.178 159053 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.178 159053 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.179 159053 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.179 159053 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.179 159053 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.179 159053 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.179 159053 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.179 159053 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.179 159053 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.179 159053 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ceph-mon[74966]: pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.179 159053 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.179 159053 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.179 159053 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.180 159053 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.180 159053 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.180 159053 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.180 159053 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.180 159053 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.180 159053 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.180 159053 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.180 159053 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.180 159053 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.180 159053 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.181 159053 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.181 159053 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.181 159053 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.181 159053 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.181 159053 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.181 159053 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.181 159053 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.181 159053 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.181 159053 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.181 159053 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.181 159053 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.182 159053 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.182 159053 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.182 159053 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.182 159053 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.182 159053 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.182 159053 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.182 159053 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.182 159053 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.182 159053 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.182 159053 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.183 159053 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.183 159053 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.183 159053 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.183 159053 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.183 159053 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.183 159053 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.183 159053 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.183 159053 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.183 159053 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.183 159053 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.184 159053 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.184 159053 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.184 159053 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.184 159053 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.184 159053 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.184 159053 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.184 159053 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.184 159053 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.184 159053 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.184 159053 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.185 159053 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.185 159053 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.185 159053 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.185 159053 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.185 159053 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.185 159053 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.185 159053 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.185 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.185 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.186 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.186 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.186 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.186 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.186 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.186 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.186 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.186 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.187 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.187 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.187 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.187 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.187 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.187 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.187 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.187 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.187 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.187 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.188 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.188 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.188 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.188 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.188 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.188 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.188 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.188 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.188 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.188 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.189 159053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.189 159053 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.189 159053 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.189 159053 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.189 159053 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:46:03 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:46:03.189 159053 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 26 12:46:04 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:05 compute-0 ceph-mon[74966]: pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:46:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:46:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:46:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:46:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:46:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:46:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:46:06 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:06 compute-0 sshd-session[159165]: Accepted publickey for zuul from 192.168.122.30 port 46650 ssh2: ECDSA SHA256:oYqKaXpw3UXfGsjV9kVmxjhHDxrL0kntNo/c2mjveus
Nov 26 12:46:06 compute-0 systemd-logind[777]: New session 48 of user zuul.
Nov 26 12:46:06 compute-0 systemd[1]: Started Session 48 of User zuul.
Nov 26 12:46:06 compute-0 sshd-session[159165]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:46:07 compute-0 ceph-mon[74966]: pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:07 compute-0 python3.9[159318]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:46:08 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:08 compute-0 sudo[159472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqctbsfvsklqcplsvyolksmhsbgffqom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161167.7617037-34-97954589940031/AnsiballZ_command.py'
Nov 26 12:46:08 compute-0 sudo[159472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:08 compute-0 python3.9[159474]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:46:08 compute-0 sudo[159472]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:08 compute-0 sudo[159633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tumvulaypaikkqkxdnxugqjhnduzhnmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161168.5148075-45-23968667900674/AnsiballZ_systemd_service.py'
Nov 26 12:46:08 compute-0 sudo[159633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:09 compute-0 python3.9[159635]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 12:46:09 compute-0 systemd[1]: Reloading.
Nov 26 12:46:09 compute-0 ceph-mon[74966]: pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:09 compute-0 systemd-rc-local-generator[159656]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:46:09 compute-0 systemd-sysv-generator[159659]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:46:09 compute-0 sudo[159633]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:09 compute-0 python3.9[159820]: ansible-ansible.builtin.service_facts Invoked
Nov 26 12:46:09 compute-0 network[159837]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 12:46:09 compute-0 network[159838]: 'network-scripts' will be removed from distribution in near future.
Nov 26 12:46:09 compute-0 network[159839]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 12:46:10 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:10 compute-0 podman[159861]: 2025-11-26 12:46:10.7523643 +0000 UTC m=+0.065009186 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 12:46:10 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:46:11 compute-0 ceph-mon[74966]: pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:12 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:12 compute-0 sudo[160123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhclzeyonorkbcsiiswmzkasqxolrbkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161171.8751972-64-244431831643094/AnsiballZ_systemd_service.py'
Nov 26 12:46:12 compute-0 sudo[160123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:12 compute-0 python3.9[160125]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:46:12 compute-0 sudo[160123]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:12 compute-0 sudo[160276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrhfpmattvueqcikyeohojbasorijwof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161172.4335008-64-68219759755970/AnsiballZ_systemd_service.py'
Nov 26 12:46:12 compute-0 sudo[160276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:12 compute-0 python3.9[160278]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:46:12 compute-0 sudo[160276]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:13 compute-0 sudo[160429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puxevmrbrghqouvkowoepyoxijxlpbnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161172.9671493-64-49276594654548/AnsiballZ_systemd_service.py'
Nov 26 12:46:13 compute-0 sudo[160429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:13 compute-0 ceph-mon[74966]: pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:13 compute-0 python3.9[160431]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:46:13 compute-0 sudo[160429]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:13 compute-0 sudo[160582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbjyliecfksjeisbiyyfyevrwwdosudk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161173.5092723-64-88664188124468/AnsiballZ_systemd_service.py'
Nov 26 12:46:13 compute-0 sudo[160582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:13 compute-0 python3.9[160584]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:46:13 compute-0 sudo[160582]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:14 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:14 compute-0 sudo[160735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtxsocvgjplvcvhxvbuergyswgoslwzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161174.0474846-64-90430651654260/AnsiballZ_systemd_service.py'
Nov 26 12:46:14 compute-0 sudo[160735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:14 compute-0 python3.9[160737]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:46:14 compute-0 sudo[160735]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:14 compute-0 sudo[160888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyygmtkivkyllowkjmbvjglazansfixo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161174.5843744-64-180848627705714/AnsiballZ_systemd_service.py'
Nov 26 12:46:14 compute-0 sudo[160888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:15 compute-0 python3.9[160890]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:46:15 compute-0 sudo[160888]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:15 compute-0 ceph-mon[74966]: pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:15 compute-0 sudo[161041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tupxecajxazsghwpfefmogxnoqtjtbnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161175.1204853-64-184747397929302/AnsiballZ_systemd_service.py'
Nov 26 12:46:15 compute-0 sudo[161041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:15 compute-0 python3.9[161043]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:46:15 compute-0 sudo[161041]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:15 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:46:16 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:16 compute-0 sudo[161194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbtonutcglcwvfiutsyumxpaeicvjukw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161175.9020557-116-109902719525980/AnsiballZ_file.py'
Nov 26 12:46:16 compute-0 sudo[161194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:16 compute-0 python3.9[161196]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:46:16 compute-0 sudo[161194]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:16 compute-0 sudo[161346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aocadwncycvsjtiqnrawjnnqrrggujoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161176.441304-116-43508869949627/AnsiballZ_file.py'
Nov 26 12:46:16 compute-0 sudo[161346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:16 compute-0 python3.9[161348]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:46:16 compute-0 sudo[161346]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:17 compute-0 sudo[161498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwfzeemybfmrlpulbdjhkkblkmkhxbsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161176.8600934-116-102778706381804/AnsiballZ_file.py'
Nov 26 12:46:17 compute-0 sudo[161498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:17 compute-0 python3.9[161500]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:46:17 compute-0 sudo[161498]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:17 compute-0 ceph-mon[74966]: pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:17 compute-0 sudo[161650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmrkdjnezmncmwhjbvaozxfzywgkmoim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161177.286294-116-262708970994646/AnsiballZ_file.py'
Nov 26 12:46:17 compute-0 sudo[161650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:17 compute-0 python3.9[161652]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:46:17 compute-0 sudo[161650]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:17 compute-0 sudo[161802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgqwwlmhvrtujirtuhwhjhpgehslqajm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161177.7041714-116-207850772105722/AnsiballZ_file.py'
Nov 26 12:46:17 compute-0 sudo[161802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:18 compute-0 python3.9[161804]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:46:18 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:18 compute-0 sudo[161802]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:18 compute-0 sudo[161954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fegbjhzhikwazkhdndyyagaypovzojdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161178.1189137-116-257770313291095/AnsiballZ_file.py'
Nov 26 12:46:18 compute-0 sudo[161954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:18 compute-0 python3.9[161956]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:46:18 compute-0 sudo[161954]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:18 compute-0 sudo[162106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wifkqxgohijlbmkbisiqvktsqvjfvzrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161178.5321038-116-249036541460208/AnsiballZ_file.py'
Nov 26 12:46:18 compute-0 sudo[162106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:18 compute-0 python3.9[162108]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:46:18 compute-0 sudo[162106]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:19 compute-0 sudo[162258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zowzbsmmgbmqdqjmgchmvouctmxhtvdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161179.0066366-166-72480133890768/AnsiballZ_file.py'
Nov 26 12:46:19 compute-0 sudo[162258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:19 compute-0 ceph-mon[74966]: pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:19 compute-0 python3.9[162260]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:46:19 compute-0 sudo[162258]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:19 compute-0 sudo[162410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mldlgvknsgqtpqjowoyxdxkpwznoytyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161179.4477918-166-265741112946603/AnsiballZ_file.py'
Nov 26 12:46:19 compute-0 sudo[162410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:19 compute-0 python3.9[162412]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:46:19 compute-0 sudo[162410]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:20 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:20 compute-0 sudo[162562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwacvcphidwzopkkyadsrsdgpjevvwqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161179.866134-166-76208264836188/AnsiballZ_file.py'
Nov 26 12:46:20 compute-0 sudo[162562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:20 compute-0 python3.9[162564]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:46:20 compute-0 sudo[162562]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:20 compute-0 sudo[162714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjxhflzwvymbpjcyinbdbmwrhoezzldk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161180.4017992-166-90025242911634/AnsiballZ_file.py'
Nov 26 12:46:20 compute-0 sudo[162714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:20 compute-0 python3.9[162716]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:46:20 compute-0 sudo[162714]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:46:21 compute-0 sudo[162866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eogrddsykupnxqmwxkarairvzpjryryh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161180.8404973-166-151496894328217/AnsiballZ_file.py'
Nov 26 12:46:21 compute-0 sudo[162866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:21 compute-0 python3.9[162868]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:46:21 compute-0 sudo[162866]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:21 compute-0 ceph-mon[74966]: pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:21 compute-0 sudo[163018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmbtrnnzryafdwabcnwwpenxfgbnkkvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161181.2869263-166-6093952414662/AnsiballZ_file.py'
Nov 26 12:46:21 compute-0 sudo[163018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:21 compute-0 python3.9[163020]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:46:21 compute-0 sudo[163018]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:21 compute-0 sudo[163170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwtbxdmjfacmeazoboqbrqtjrlulzvvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161181.7215426-166-46368818076710/AnsiballZ_file.py'
Nov 26 12:46:21 compute-0 sudo[163170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:22 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:22 compute-0 python3.9[163172]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:46:22 compute-0 sudo[163170]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:22 compute-0 sudo[163322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcbtajxncmexlcjjiseufqagzduigrhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161182.2486944-217-235965673886994/AnsiballZ_command.py'
Nov 26 12:46:22 compute-0 sudo[163322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:22 compute-0 python3.9[163324]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:46:22 compute-0 sudo[163322]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:23 compute-0 python3.9[163476]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 12:46:23 compute-0 ceph-mon[74966]: pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:23 compute-0 sudo[163626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okgwlxqwkxfhozpcskcuddehlnvoltge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161183.3638077-235-247198067759120/AnsiballZ_systemd_service.py'
Nov 26 12:46:23 compute-0 sudo[163626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:23 compute-0 python3.9[163628]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 12:46:23 compute-0 systemd[1]: Reloading.
Nov 26 12:46:23 compute-0 systemd-rc-local-generator[163649]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:46:23 compute-0 systemd-sysv-generator[163652]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:46:24 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:24 compute-0 sudo[163626]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:24 compute-0 sudo[163813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lndpfwgrryehphfcrtlxjtgjtbsddkqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161184.1530943-243-189732263241455/AnsiballZ_command.py'
Nov 26 12:46:24 compute-0 sudo[163813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:24 compute-0 python3.9[163815]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:46:24 compute-0 sudo[163813]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:24 compute-0 sudo[163966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eextwmjfqsklwzmsokuoxggfjckqukso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161184.5906157-243-224414361740833/AnsiballZ_command.py'
Nov 26 12:46:24 compute-0 sudo[163966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:24 compute-0 python3.9[163968]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:46:24 compute-0 sudo[163966]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:25 compute-0 sudo[164119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzbcmpcsqcnuvsvbfonfpejpmipqthky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161185.0210502-243-58446827948456/AnsiballZ_command.py'
Nov 26 12:46:25 compute-0 sudo[164119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:25 compute-0 ceph-mon[74966]: pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:25 compute-0 python3.9[164121]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:46:25 compute-0 sudo[164119]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:25 compute-0 sudo[164272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysciwyxczblazljhaqwhaxtuujubgwtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161185.4557118-243-106894465672557/AnsiballZ_command.py'
Nov 26 12:46:25 compute-0 sudo[164272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:25 compute-0 python3.9[164274]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:46:25 compute-0 sudo[164272]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:25 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:46:25 compute-0 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Nov 26 12:46:25 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:25.994652) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 12:46:25 compute-0 ceph-mon[74966]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Nov 26 12:46:25 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161185994680, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1461, "num_deletes": 250, "total_data_size": 2297655, "memory_usage": 2326888, "flush_reason": "Manual Compaction"}
Nov 26 12:46:25 compute-0 ceph-mon[74966]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Nov 26 12:46:25 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161185998485, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1318929, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7385, "largest_seqno": 8845, "table_properties": {"data_size": 1314037, "index_size": 2224, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12650, "raw_average_key_size": 19, "raw_value_size": 1303089, "raw_average_value_size": 2039, "num_data_blocks": 106, "num_entries": 639, "num_filter_entries": 639, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764161030, "oldest_key_time": 1764161030, "file_creation_time": 1764161185, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "360f285c-8dc8-4f98-b8a2-efdebada3f64", "db_session_id": "S468WH7D6IL73VDKE1V5", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Nov 26 12:46:25 compute-0 ceph-mon[74966]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 3857 microseconds, and 2889 cpu microseconds.
Nov 26 12:46:25 compute-0 ceph-mon[74966]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 12:46:25 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:25.998512) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1318929 bytes OK
Nov 26 12:46:25 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:25.998522) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Nov 26 12:46:25 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:25.998813) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Nov 26 12:46:25 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:25.998823) EVENT_LOG_v1 {"time_micros": 1764161185998820, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 12:46:25 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:25.998833) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 12:46:25 compute-0 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2291234, prev total WAL file size 2291234, number of live WAL files 2.
Nov 26 12:46:25 compute-0 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 12:46:25 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:25.999333) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Nov 26 12:46:25 compute-0 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 12:46:25 compute-0 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1288KB)], [20(7417KB)]
Nov 26 12:46:25 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161185999416, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8914114, "oldest_snapshot_seqno": -1}
Nov 26 12:46:26 compute-0 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3322 keys, 6858498 bytes, temperature: kUnknown
Nov 26 12:46:26 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161186013960, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6858498, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6833231, "index_size": 15878, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 79678, "raw_average_key_size": 23, "raw_value_size": 6770109, "raw_average_value_size": 2037, "num_data_blocks": 705, "num_entries": 3322, "num_filter_entries": 3322, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160613, "oldest_key_time": 0, "file_creation_time": 1764161185, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "360f285c-8dc8-4f98-b8a2-efdebada3f64", "db_session_id": "S468WH7D6IL73VDKE1V5", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 26 12:46:26 compute-0 ceph-mon[74966]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 12:46:26 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:26.014090) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6858498 bytes
Nov 26 12:46:26 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:26.014426) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 612.1 rd, 470.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.2 +0.0 blob) out(6.5 +0.0 blob), read-write-amplify(12.0) write-amplify(5.2) OK, records in: 3762, records dropped: 440 output_compression: NoCompression
Nov 26 12:46:26 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:26.014443) EVENT_LOG_v1 {"time_micros": 1764161186014436, "job": 6, "event": "compaction_finished", "compaction_time_micros": 14564, "compaction_time_cpu_micros": 11592, "output_level": 6, "num_output_files": 1, "total_output_size": 6858498, "num_input_records": 3762, "num_output_records": 3322, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 12:46:26 compute-0 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 12:46:26 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161186014686, "job": 6, "event": "table_file_deletion", "file_number": 22}
Nov 26 12:46:26 compute-0 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 12:46:26 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161186015666, "job": 6, "event": "table_file_deletion", "file_number": 20}
Nov 26 12:46:26 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:25.999233) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:46:26 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:26.015701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:46:26 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:26.015704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:46:26 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:26.015705) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:46:26 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:26.015707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:46:26 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:46:26.015708) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:46:26 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:26 compute-0 sudo[164425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gybamrmjsbhscfsdgycrjrxnzfskglry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161185.8820667-243-136961030304532/AnsiballZ_command.py'
Nov 26 12:46:26 compute-0 sudo[164425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:26 compute-0 python3.9[164427]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:46:26 compute-0 sudo[164425]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:26 compute-0 sudo[164578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nntlwugyhknhhewttcghsllckxkykbeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161186.3377154-243-67266840269677/AnsiballZ_command.py'
Nov 26 12:46:26 compute-0 sudo[164578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:26 compute-0 python3.9[164580]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:46:26 compute-0 sudo[164578]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:26 compute-0 sudo[164731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grbilalugsitbdyembvldxlpgnbhguna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161186.779649-243-184782335436916/AnsiballZ_command.py'
Nov 26 12:46:26 compute-0 sudo[164731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:26 compute-0 ceph-mon[74966]: pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:27 compute-0 python3.9[164733]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:46:27 compute-0 sudo[164731]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:27 compute-0 sudo[164884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nprqppsnuxxalabbnvwurwwzgwdlplrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161187.4150984-297-80707939771396/AnsiballZ_getent.py'
Nov 26 12:46:27 compute-0 sudo[164884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:27 compute-0 python3.9[164886]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 26 12:46:27 compute-0 sudo[164884]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:28 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:28 compute-0 sudo[165037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cikjsfrijbeypvbqsgzlkgjtyanfadws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161187.9828281-305-127561887294946/AnsiballZ_group.py'
Nov 26 12:46:28 compute-0 sudo[165037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:28 compute-0 python3.9[165039]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 26 12:46:28 compute-0 groupadd[165040]: group added to /etc/group: name=libvirt, GID=42473
Nov 26 12:46:28 compute-0 groupadd[165040]: group added to /etc/gshadow: name=libvirt
Nov 26 12:46:28 compute-0 groupadd[165040]: new group: name=libvirt, GID=42473
Nov 26 12:46:28 compute-0 sudo[165037]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:28 compute-0 sudo[165195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxwzesyeeyqaktmvesmdqdftctrfncty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161188.6073577-313-186264112979416/AnsiballZ_user.py'
Nov 26 12:46:28 compute-0 sudo[165195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:29 compute-0 ceph-mon[74966]: pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:29 compute-0 python3.9[165197]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 26 12:46:29 compute-0 useradd[165199]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Nov 26 12:46:29 compute-0 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 12:46:29 compute-0 sudo[165195]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:29 compute-0 sudo[165356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gejtrxhnhcjxfdvmhcxnlhafrdpruvkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161189.434275-324-98806092606460/AnsiballZ_setup.py'
Nov 26 12:46:29 compute-0 sudo[165356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:29 compute-0 python3.9[165358]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 12:46:30 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:30 compute-0 sudo[165356]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:30 compute-0 sudo[165449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzjrzzaqnjifskhcgzrdhbseigiuqjwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161189.434275-324-98806092606460/AnsiballZ_dnf.py'
Nov 26 12:46:30 compute-0 sudo[165449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:46:30 compute-0 podman[165414]: 2025-11-26 12:46:30.349267235 +0000 UTC m=+0.040942727 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 26 12:46:30 compute-0 python3.9[165459]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 12:46:30 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:46:31 compute-0 ceph-mon[74966]: pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:32 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:33 compute-0 ceph-mon[74966]: pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:34 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:35 compute-0 ceph-mon[74966]: pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:46:35
Nov 26 12:46:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 12:46:35 compute-0 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 12:46:35 compute-0 ceph-mgr[75236]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'default.rgw.control', 'volumes', 'images', 'cephfs.cephfs.meta', 'backups']
Nov 26 12:46:35 compute-0 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 12:46:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:46:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:46:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:46:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:46:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:46:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:46:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 12:46:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:46:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 12:46:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:46:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:46:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:46:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:46:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:46:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:46:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:46:35 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:46:36 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:37 compute-0 ceph-mon[74966]: pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:38 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:39 compute-0 ceph-mon[74966]: pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:40 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:40 compute-0 podman[165644]: 2025-11-26 12:46:40.903625108 +0000 UTC m=+0.066935215 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 26 12:46:40 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:46:41 compute-0 ceph-mon[74966]: pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:42 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:43 compute-0 ceph-mon[74966]: pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:43 compute-0 sudo[165674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:46:43 compute-0 sudo[165674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:46:43 compute-0 sudo[165674]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:43 compute-0 sudo[165699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:46:43 compute-0 sudo[165699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:46:43 compute-0 sudo[165699]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:43 compute-0 sudo[165724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:46:43 compute-0 sudo[165724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:46:43 compute-0 sudo[165724]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:43 compute-0 sudo[165749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 12:46:43 compute-0 sudo[165749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:46:44 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:44 compute-0 sudo[165749]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:44 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:46:44 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:46:44 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 12:46:44 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:46:44 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 12:46:44 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:46:44 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:46:44 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:46:44 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 65762dfa-a4b1-4b93-ad1a-9b7c872f2b65 does not exist
Nov 26 12:46:44 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 82127588-d9e2-4329-b4ef-ed610a1a083d does not exist
Nov 26 12:46:44 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 67bdc3bb-6ca8-4120-a4c2-50bb68709fde does not exist
Nov 26 12:46:44 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 12:46:44 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:46:44 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 12:46:44 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:46:44 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:46:44 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:46:44 compute-0 sudo[165803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:46:44 compute-0 sudo[165803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:46:44 compute-0 sudo[165803]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:44 compute-0 sudo[165828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:46:44 compute-0 sudo[165828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:46:44 compute-0 sudo[165828]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:44 compute-0 sudo[165853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:46:44 compute-0 sudo[165853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:46:44 compute-0 sudo[165853]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:44 compute-0 sudo[165878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 12:46:44 compute-0 sudo[165878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:46:44 compute-0 podman[165934]: 2025-11-26 12:46:44.519721531 +0000 UTC m=+0.031419017 container create 0648ed0b3aca497afb5c4840d1d5f548b5526262e7596131116bb4b854597449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 12:46:44 compute-0 systemd[1]: Started libpod-conmon-0648ed0b3aca497afb5c4840d1d5f548b5526262e7596131116bb4b854597449.scope.
Nov 26 12:46:44 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:46:44 compute-0 podman[165934]: 2025-11-26 12:46:44.573937192 +0000 UTC m=+0.085634688 container init 0648ed0b3aca497afb5c4840d1d5f548b5526262e7596131116bb4b854597449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 12:46:44 compute-0 podman[165934]: 2025-11-26 12:46:44.57851947 +0000 UTC m=+0.090216965 container start 0648ed0b3aca497afb5c4840d1d5f548b5526262e7596131116bb4b854597449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_volhard, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:46:44 compute-0 podman[165934]: 2025-11-26 12:46:44.579607941 +0000 UTC m=+0.091305436 container attach 0648ed0b3aca497afb5c4840d1d5f548b5526262e7596131116bb4b854597449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_volhard, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:46:44 compute-0 musing_volhard[165948]: 167 167
Nov 26 12:46:44 compute-0 systemd[1]: libpod-0648ed0b3aca497afb5c4840d1d5f548b5526262e7596131116bb4b854597449.scope: Deactivated successfully.
Nov 26 12:46:44 compute-0 podman[165934]: 2025-11-26 12:46:44.586449546 +0000 UTC m=+0.098147040 container died 0648ed0b3aca497afb5c4840d1d5f548b5526262e7596131116bb4b854597449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 12:46:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-387896585ec97bef6667ecb61c6c40e2ec793b841ecacbdbb3cfb86bab3252a1-merged.mount: Deactivated successfully.
Nov 26 12:46:44 compute-0 podman[165934]: 2025-11-26 12:46:44.507871177 +0000 UTC m=+0.019568692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:46:44 compute-0 podman[165934]: 2025-11-26 12:46:44.60902301 +0000 UTC m=+0.120720504 container remove 0648ed0b3aca497afb5c4840d1d5f548b5526262e7596131116bb4b854597449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 26 12:46:44 compute-0 systemd[1]: libpod-conmon-0648ed0b3aca497afb5c4840d1d5f548b5526262e7596131116bb4b854597449.scope: Deactivated successfully.
Nov 26 12:46:44 compute-0 podman[165969]: 2025-11-26 12:46:44.724804535 +0000 UTC m=+0.026806373 container create a4971b8076dd1630cebacb204755acf69b1bc5a0c274565bba7dbc5457b9e7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_brahmagupta, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 12:46:44 compute-0 systemd[1]: Started libpod-conmon-a4971b8076dd1630cebacb204755acf69b1bc5a0c274565bba7dbc5457b9e7a4.scope.
Nov 26 12:46:44 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:46:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f3e65d06183cd9bf83db731902574003076eb5071db13676452600f2467c16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:46:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f3e65d06183cd9bf83db731902574003076eb5071db13676452600f2467c16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:46:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f3e65d06183cd9bf83db731902574003076eb5071db13676452600f2467c16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:46:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f3e65d06183cd9bf83db731902574003076eb5071db13676452600f2467c16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:46:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f3e65d06183cd9bf83db731902574003076eb5071db13676452600f2467c16/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:46:44 compute-0 podman[165969]: 2025-11-26 12:46:44.779574922 +0000 UTC m=+0.081576769 container init a4971b8076dd1630cebacb204755acf69b1bc5a0c274565bba7dbc5457b9e7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 12:46:44 compute-0 podman[165969]: 2025-11-26 12:46:44.785158676 +0000 UTC m=+0.087160513 container start a4971b8076dd1630cebacb204755acf69b1bc5a0c274565bba7dbc5457b9e7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_brahmagupta, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:46:44 compute-0 podman[165969]: 2025-11-26 12:46:44.786211581 +0000 UTC m=+0.088213418 container attach a4971b8076dd1630cebacb204755acf69b1bc5a0c274565bba7dbc5457b9e7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 12:46:44 compute-0 podman[165969]: 2025-11-26 12:46:44.714394656 +0000 UTC m=+0.016396513 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:46:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 12:46:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:46:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 12:46:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:46:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:46:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:46:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:46:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:46:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:46:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:46:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:46:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:46:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 12:46:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:46:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:46:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:46:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 12:46:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:46:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 12:46:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:46:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:46:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:46:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 12:46:45 compute-0 ceph-mon[74966]: pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:45 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:46:45 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:46:45 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:46:45 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:46:45 compute-0 lucid_brahmagupta[165982]: --> passed data devices: 0 physical, 3 LVM
Nov 26 12:46:45 compute-0 lucid_brahmagupta[165982]: --> relative data size: 1.0
Nov 26 12:46:45 compute-0 lucid_brahmagupta[165982]: --> All data devices are unavailable
Nov 26 12:46:45 compute-0 systemd[1]: libpod-a4971b8076dd1630cebacb204755acf69b1bc5a0c274565bba7dbc5457b9e7a4.scope: Deactivated successfully.
Nov 26 12:46:45 compute-0 podman[165969]: 2025-11-26 12:46:45.597111278 +0000 UTC m=+0.899113115 container died a4971b8076dd1630cebacb204755acf69b1bc5a0c274565bba7dbc5457b9e7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_brahmagupta, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:46:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-50f3e65d06183cd9bf83db731902574003076eb5071db13676452600f2467c16-merged.mount: Deactivated successfully.
Nov 26 12:46:45 compute-0 podman[165969]: 2025-11-26 12:46:45.632871025 +0000 UTC m=+0.934872852 container remove a4971b8076dd1630cebacb204755acf69b1bc5a0c274565bba7dbc5457b9e7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:46:45 compute-0 systemd[1]: libpod-conmon-a4971b8076dd1630cebacb204755acf69b1bc5a0c274565bba7dbc5457b9e7a4.scope: Deactivated successfully.
Nov 26 12:46:45 compute-0 sudo[165878]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:45 compute-0 sudo[166021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:46:45 compute-0 sudo[166021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:46:45 compute-0 sudo[166021]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:45 compute-0 sudo[166046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:46:45 compute-0 sudo[166046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:46:45 compute-0 sudo[166046]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:45 compute-0 sudo[166071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:46:45 compute-0 sudo[166071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:46:45 compute-0 sudo[166071]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:45 compute-0 sudo[166096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- lvm list --format json
Nov 26 12:46:45 compute-0 sudo[166096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:46:45 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:46:46 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:46 compute-0 podman[166153]: 2025-11-26 12:46:46.056581427 +0000 UTC m=+0.032128154 container create 78645b97e33d277121db01264187f99bcc7649352c6ff412018be9c75d208288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_snyder, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:46:46 compute-0 systemd[1]: Started libpod-conmon-78645b97e33d277121db01264187f99bcc7649352c6ff412018be9c75d208288.scope.
Nov 26 12:46:46 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:46:46 compute-0 podman[166153]: 2025-11-26 12:46:46.106187251 +0000 UTC m=+0.081733989 container init 78645b97e33d277121db01264187f99bcc7649352c6ff412018be9c75d208288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_snyder, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 12:46:46 compute-0 podman[166153]: 2025-11-26 12:46:46.110496385 +0000 UTC m=+0.086043113 container start 78645b97e33d277121db01264187f99bcc7649352c6ff412018be9c75d208288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:46:46 compute-0 podman[166153]: 2025-11-26 12:46:46.111854611 +0000 UTC m=+0.087401340 container attach 78645b97e33d277121db01264187f99bcc7649352c6ff412018be9c75d208288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_snyder, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:46:46 compute-0 admiring_snyder[166166]: 167 167
Nov 26 12:46:46 compute-0 systemd[1]: libpod-78645b97e33d277121db01264187f99bcc7649352c6ff412018be9c75d208288.scope: Deactivated successfully.
Nov 26 12:46:46 compute-0 podman[166153]: 2025-11-26 12:46:46.114104648 +0000 UTC m=+0.089651377 container died 78645b97e33d277121db01264187f99bcc7649352c6ff412018be9c75d208288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 12:46:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-def5b94c6ba9067172ee607c299ab0b35a062ae61f8ada3aeef13de7c7c3d049-merged.mount: Deactivated successfully.
Nov 26 12:46:46 compute-0 podman[166153]: 2025-11-26 12:46:46.132955102 +0000 UTC m=+0.108501830 container remove 78645b97e33d277121db01264187f99bcc7649352c6ff412018be9c75d208288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:46:46 compute-0 podman[166153]: 2025-11-26 12:46:46.040992423 +0000 UTC m=+0.016539172 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:46:46 compute-0 systemd[1]: libpod-conmon-78645b97e33d277121db01264187f99bcc7649352c6ff412018be9c75d208288.scope: Deactivated successfully.
Nov 26 12:46:46 compute-0 podman[166188]: 2025-11-26 12:46:46.247687715 +0000 UTC m=+0.025582521 container create 344faed5c9920e0d712e3acdfb2e926650ae9bb588d3f46860a5e442ffa3094f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wiles, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:46:46 compute-0 systemd[1]: Started libpod-conmon-344faed5c9920e0d712e3acdfb2e926650ae9bb588d3f46860a5e442ffa3094f.scope.
Nov 26 12:46:46 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:46:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06be778efa5a165f4c4efb6aeea6dd6fa3f86545a430c72d2db0e008ae49d7a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:46:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06be778efa5a165f4c4efb6aeea6dd6fa3f86545a430c72d2db0e008ae49d7a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:46:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06be778efa5a165f4c4efb6aeea6dd6fa3f86545a430c72d2db0e008ae49d7a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:46:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06be778efa5a165f4c4efb6aeea6dd6fa3f86545a430c72d2db0e008ae49d7a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:46:46 compute-0 podman[166188]: 2025-11-26 12:46:46.306834002 +0000 UTC m=+0.084728808 container init 344faed5c9920e0d712e3acdfb2e926650ae9bb588d3f46860a5e442ffa3094f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 12:46:46 compute-0 podman[166188]: 2025-11-26 12:46:46.311890483 +0000 UTC m=+0.089785290 container start 344faed5c9920e0d712e3acdfb2e926650ae9bb588d3f46860a5e442ffa3094f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wiles, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 12:46:46 compute-0 podman[166188]: 2025-11-26 12:46:46.312950689 +0000 UTC m=+0.090845495 container attach 344faed5c9920e0d712e3acdfb2e926650ae9bb588d3f46860a5e442ffa3094f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 26 12:46:46 compute-0 podman[166188]: 2025-11-26 12:46:46.237714696 +0000 UTC m=+0.015609522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]: {
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:     "0": [
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:         {
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "devices": [
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "/dev/loop3"
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             ],
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "lv_name": "ceph_lv0",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "lv_size": "21470642176",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "name": "ceph_lv0",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "tags": {
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.cluster_name": "ceph",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.crush_device_class": "",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.encrypted": "0",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.osd_id": "0",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.type": "block",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.vdo": "0"
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             },
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "type": "block",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "vg_name": "ceph_vg0"
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:         }
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:     ],
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:     "1": [
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:         {
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "devices": [
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "/dev/loop4"
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             ],
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "lv_name": "ceph_lv1",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "lv_size": "21470642176",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "name": "ceph_lv1",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "tags": {
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.cluster_name": "ceph",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.crush_device_class": "",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.encrypted": "0",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.osd_id": "1",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.type": "block",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.vdo": "0"
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             },
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "type": "block",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "vg_name": "ceph_vg1"
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:         }
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:     ],
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:     "2": [
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:         {
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "devices": [
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "/dev/loop5"
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             ],
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "lv_name": "ceph_lv2",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "lv_size": "21470642176",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "name": "ceph_lv2",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "tags": {
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.cluster_name": "ceph",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.crush_device_class": "",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.encrypted": "0",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.osd_id": "2",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.type": "block",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:                 "ceph.vdo": "0"
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             },
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "type": "block",
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:             "vg_name": "ceph_vg2"
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:         }
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]:     ]
Nov 26 12:46:46 compute-0 peaceful_wiles[166202]: }
Nov 26 12:46:46 compute-0 systemd[1]: libpod-344faed5c9920e0d712e3acdfb2e926650ae9bb588d3f46860a5e442ffa3094f.scope: Deactivated successfully.
Nov 26 12:46:46 compute-0 podman[166211]: 2025-11-26 12:46:46.965918594 +0000 UTC m=+0.016859145 container died 344faed5c9920e0d712e3acdfb2e926650ae9bb588d3f46860a5e442ffa3094f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wiles, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 12:46:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-06be778efa5a165f4c4efb6aeea6dd6fa3f86545a430c72d2db0e008ae49d7a5-merged.mount: Deactivated successfully.
Nov 26 12:46:46 compute-0 podman[166211]: 2025-11-26 12:46:46.994667857 +0000 UTC m=+0.045608399 container remove 344faed5c9920e0d712e3acdfb2e926650ae9bb588d3f46860a5e442ffa3094f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wiles, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 12:46:46 compute-0 systemd[1]: libpod-conmon-344faed5c9920e0d712e3acdfb2e926650ae9bb588d3f46860a5e442ffa3094f.scope: Deactivated successfully.
Nov 26 12:46:47 compute-0 sudo[166096]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:47 compute-0 sudo[166223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:46:47 compute-0 sudo[166223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:46:47 compute-0 sudo[166223]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:47 compute-0 sudo[166248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:46:47 compute-0 sudo[166248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:46:47 compute-0 sudo[166248]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:47 compute-0 ceph-mon[74966]: pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:47 compute-0 sudo[166273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:46:47 compute-0 sudo[166273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:46:47 compute-0 sudo[166273]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:47 compute-0 sudo[166298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- raw list --format json
Nov 26 12:46:47 compute-0 sudo[166298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:46:47 compute-0 podman[166354]: 2025-11-26 12:46:47.423777686 +0000 UTC m=+0.032183500 container create a4e6ed90b27531bcf60c6de50d06309969bb6441c47114810da89ec73c6e71d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_benz, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:46:47 compute-0 systemd[1]: Started libpod-conmon-a4e6ed90b27531bcf60c6de50d06309969bb6441c47114810da89ec73c6e71d9.scope.
Nov 26 12:46:47 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:46:47 compute-0 podman[166354]: 2025-11-26 12:46:47.477470815 +0000 UTC m=+0.085876629 container init a4e6ed90b27531bcf60c6de50d06309969bb6441c47114810da89ec73c6e71d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_benz, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 12:46:47 compute-0 podman[166354]: 2025-11-26 12:46:47.48181797 +0000 UTC m=+0.090223774 container start a4e6ed90b27531bcf60c6de50d06309969bb6441c47114810da89ec73c6e71d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:46:47 compute-0 podman[166354]: 2025-11-26 12:46:47.483095245 +0000 UTC m=+0.091501069 container attach a4e6ed90b27531bcf60c6de50d06309969bb6441c47114810da89ec73c6e71d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_benz, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 12:46:47 compute-0 thirsty_benz[166368]: 167 167
Nov 26 12:46:47 compute-0 systemd[1]: libpod-a4e6ed90b27531bcf60c6de50d06309969bb6441c47114810da89ec73c6e71d9.scope: Deactivated successfully.
Nov 26 12:46:47 compute-0 conmon[166368]: conmon a4e6ed90b27531bcf60c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a4e6ed90b27531bcf60c6de50d06309969bb6441c47114810da89ec73c6e71d9.scope/container/memory.events
Nov 26 12:46:47 compute-0 podman[166354]: 2025-11-26 12:46:47.486991512 +0000 UTC m=+0.095397316 container died a4e6ed90b27531bcf60c6de50d06309969bb6441c47114810da89ec73c6e71d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:46:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f069efd7cf4d45e1b49232e8f02512f46c4f6a8de7676088a506425e1839254-merged.mount: Deactivated successfully.
Nov 26 12:46:47 compute-0 podman[166354]: 2025-11-26 12:46:47.412948846 +0000 UTC m=+0.021354650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:46:47 compute-0 podman[166354]: 2025-11-26 12:46:47.513153724 +0000 UTC m=+0.121559528 container remove a4e6ed90b27531bcf60c6de50d06309969bb6441c47114810da89ec73c6e71d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 12:46:47 compute-0 systemd[1]: libpod-conmon-a4e6ed90b27531bcf60c6de50d06309969bb6441c47114810da89ec73c6e71d9.scope: Deactivated successfully.
Nov 26 12:46:47 compute-0 podman[166391]: 2025-11-26 12:46:47.631093724 +0000 UTC m=+0.026795994 container create f232e78971337a306b01f743dc488f89d73f6b97debeb23497cc1eb5973eb10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_raman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 12:46:47 compute-0 systemd[1]: Started libpod-conmon-f232e78971337a306b01f743dc488f89d73f6b97debeb23497cc1eb5973eb10d.scope.
Nov 26 12:46:47 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:46:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f0e4377a8c8aa52baec47bdc83f9bbbf831410164b9e40fa2f57a772c829c7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:46:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f0e4377a8c8aa52baec47bdc83f9bbbf831410164b9e40fa2f57a772c829c7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:46:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f0e4377a8c8aa52baec47bdc83f9bbbf831410164b9e40fa2f57a772c829c7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:46:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f0e4377a8c8aa52baec47bdc83f9bbbf831410164b9e40fa2f57a772c829c7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:46:47 compute-0 podman[166391]: 2025-11-26 12:46:47.688389348 +0000 UTC m=+0.084091607 container init f232e78971337a306b01f743dc488f89d73f6b97debeb23497cc1eb5973eb10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_raman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:46:47 compute-0 podman[166391]: 2025-11-26 12:46:47.69405229 +0000 UTC m=+0.089754550 container start f232e78971337a306b01f743dc488f89d73f6b97debeb23497cc1eb5973eb10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_raman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:46:47 compute-0 podman[166391]: 2025-11-26 12:46:47.695107157 +0000 UTC m=+0.090809417 container attach f232e78971337a306b01f743dc488f89d73f6b97debeb23497cc1eb5973eb10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_raman, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:46:47 compute-0 podman[166391]: 2025-11-26 12:46:47.62053317 +0000 UTC m=+0.016235451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:46:48 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:48 compute-0 agitated_raman[166405]: {
Nov 26 12:46:48 compute-0 agitated_raman[166405]:     "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 12:46:48 compute-0 agitated_raman[166405]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:46:48 compute-0 agitated_raman[166405]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 12:46:48 compute-0 agitated_raman[166405]:         "osd_id": 1,
Nov 26 12:46:48 compute-0 agitated_raman[166405]:         "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:46:48 compute-0 agitated_raman[166405]:         "type": "bluestore"
Nov 26 12:46:48 compute-0 agitated_raman[166405]:     },
Nov 26 12:46:48 compute-0 agitated_raman[166405]:     "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 12:46:48 compute-0 agitated_raman[166405]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:46:48 compute-0 agitated_raman[166405]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 12:46:48 compute-0 agitated_raman[166405]:         "osd_id": 2,
Nov 26 12:46:48 compute-0 agitated_raman[166405]:         "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:46:48 compute-0 agitated_raman[166405]:         "type": "bluestore"
Nov 26 12:46:48 compute-0 agitated_raman[166405]:     },
Nov 26 12:46:48 compute-0 agitated_raman[166405]:     "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 12:46:48 compute-0 agitated_raman[166405]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:46:48 compute-0 agitated_raman[166405]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 12:46:48 compute-0 agitated_raman[166405]:         "osd_id": 0,
Nov 26 12:46:48 compute-0 agitated_raman[166405]:         "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:46:48 compute-0 agitated_raman[166405]:         "type": "bluestore"
Nov 26 12:46:48 compute-0 agitated_raman[166405]:     }
Nov 26 12:46:48 compute-0 agitated_raman[166405]: }
Nov 26 12:46:48 compute-0 systemd[1]: libpod-f232e78971337a306b01f743dc488f89d73f6b97debeb23497cc1eb5973eb10d.scope: Deactivated successfully.
Nov 26 12:46:48 compute-0 conmon[166405]: conmon f232e78971337a306b01 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f232e78971337a306b01f743dc488f89d73f6b97debeb23497cc1eb5973eb10d.scope/container/memory.events
Nov 26 12:46:48 compute-0 podman[166391]: 2025-11-26 12:46:48.459521897 +0000 UTC m=+0.855224158 container died f232e78971337a306b01f743dc488f89d73f6b97debeb23497cc1eb5973eb10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_raman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 12:46:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f0e4377a8c8aa52baec47bdc83f9bbbf831410164b9e40fa2f57a772c829c7b-merged.mount: Deactivated successfully.
Nov 26 12:46:48 compute-0 podman[166391]: 2025-11-26 12:46:48.490237242 +0000 UTC m=+0.885939501 container remove f232e78971337a306b01f743dc488f89d73f6b97debeb23497cc1eb5973eb10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 12:46:48 compute-0 systemd[1]: libpod-conmon-f232e78971337a306b01f743dc488f89d73f6b97debeb23497cc1eb5973eb10d.scope: Deactivated successfully.
Nov 26 12:46:48 compute-0 sudo[166298]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:48 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:46:48 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:46:48 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:46:48 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:46:48 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 9a5269a8-f8e8-41a4-87c0-b48cbae4830d does not exist
Nov 26 12:46:48 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev d05bc1dc-cdfd-4109-8e13-17dca0c39cfe does not exist
Nov 26 12:46:48 compute-0 sudo[166448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:46:48 compute-0 sudo[166448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:46:48 compute-0 sudo[166448]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:48 compute-0 sudo[166473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 12:46:48 compute-0 sudo[166473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:46:48 compute-0 sudo[166473]: pam_unix(sudo:session): session closed for user root
Nov 26 12:46:49 compute-0 ceph-mon[74966]: pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:49 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:46:49 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:46:49 compute-0 kernel: SELinux:  Converting 2769 SID table entries...
Nov 26 12:46:49 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 12:46:49 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 26 12:46:49 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 12:46:49 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 26 12:46:49 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 12:46:49 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 12:46:49 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 12:46:50 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:46:51 compute-0 ceph-mon[74966]: pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:52 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:53 compute-0 ceph-mon[74966]: pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:54 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:55 compute-0 ceph-mon[74966]: pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:55 compute-0 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 12:46:55 compute-0 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2014 writes, 8959 keys, 2014 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 2014 writes, 2014 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2014 writes, 8959 keys, 2014 commit groups, 1.0 writes per commit group, ingest: 11.64 MB, 0.02 MB/s
                                           Interval WAL: 2014 writes, 2014 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    436.4      0.02              0.01         3    0.007       0      0       0.0       0.0
                                             L6      1/0    6.54 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    526.7    459.9      0.03              0.02         2    0.015    7174    729       0.0       0.0
                                            Sum      1/0    6.54 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    318.6    450.6      0.05              0.04         5    0.010    7174    729       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    325.5    459.3      0.05              0.04         4    0.012    7174    729       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    526.7    459.9      0.03              0.02         2    0.015    7174    729       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    458.4      0.02              0.01         2    0.009       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     48.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.008, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.0 seconds
                                           Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560bd0e9b1f0#2 capacity: 308.00 MB usage: 566.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(35,478.89 KB,0.15184%) FilterBlock(6,28.30 KB,0.00897197%) IndexBlock(6,58.91 KB,0.0186772%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 26 12:46:55 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:46:56 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:57 compute-0 kernel: SELinux:  Converting 2769 SID table entries...
Nov 26 12:46:57 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 12:46:57 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 26 12:46:57 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 12:46:57 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 26 12:46:57 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 12:46:57 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 12:46:57 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 12:46:57 compute-0 ceph-mon[74966]: pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:58 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:46:59 compute-0 ceph-mon[74966]: pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:00 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:00 compute-0 dbus-broker-launch[767]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 26 12:47:00 compute-0 podman[166513]: 2025-11-26 12:47:00.876268566 +0000 UTC m=+0.042591145 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 26 12:47:00 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:47:01 compute-0 ceph-mon[74966]: pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:47:01.720 159053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 12:47:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:47:01.721 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 12:47:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:47:01.721 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 12:47:02 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:03 compute-0 ceph-mon[74966]: pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:04 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:05 compute-0 ceph-mon[74966]: pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:47:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:47:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:47:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:47:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:47:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:47:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:47:06 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:07 compute-0 ceph-mon[74966]: pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:08 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:09 compute-0 ceph-mon[74966]: pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:10 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:10 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:47:11 compute-0 ceph-mon[74966]: pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:11 compute-0 podman[171613]: 2025-11-26 12:47:11.884489345 +0000 UTC m=+0.056480670 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 26 12:47:12 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:13 compute-0 ceph-mon[74966]: pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:14 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:15 compute-0 ceph-mon[74966]: pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:15 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:47:16 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:17 compute-0 ceph-mon[74966]: pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:18 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:19 compute-0 ceph-mon[74966]: pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:20 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:21 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:47:21 compute-0 ceph-mon[74966]: pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:22 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:23 compute-0 ceph-mon[74966]: pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:24 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:25 compute-0 ceph-mon[74966]: pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:47:26 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:27 compute-0 ceph-mon[74966]: pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:28 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:29 compute-0 ceph-mon[74966]: pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:30 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:31 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:47:31 compute-0 ceph-mon[74966]: pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:31 compute-0 podman[183350]: 2025-11-26 12:47:31.873675376 +0000 UTC m=+0.042870723 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:47:32 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:33 compute-0 ceph-mon[74966]: pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:34 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:35 compute-0 kernel: SELinux:  Converting 2770 SID table entries...
Nov 26 12:47:35 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 12:47:35 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 26 12:47:35 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 12:47:35 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 26 12:47:35 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 12:47:35 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 12:47:35 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 12:47:35 compute-0 ceph-mon[74966]: pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:35 compute-0 groupadd[183379]: group added to /etc/group: name=dnsmasq, GID=991
Nov 26 12:47:35 compute-0 groupadd[183379]: group added to /etc/gshadow: name=dnsmasq
Nov 26 12:47:35 compute-0 groupadd[183379]: new group: name=dnsmasq, GID=991
Nov 26 12:47:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:47:35
Nov 26 12:47:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 12:47:35 compute-0 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 12:47:35 compute-0 ceph-mgr[75236]: [balancer INFO root] pools ['default.rgw.control', 'images', 'backups', 'vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'default.rgw.meta']
Nov 26 12:47:35 compute-0 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 12:47:35 compute-0 useradd[183386]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Nov 26 12:47:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:47:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:47:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:47:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:47:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:47:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:47:35 compute-0 dbus-broker-launch[766]: Noticed file-system modification, trigger reload.
Nov 26 12:47:35 compute-0 dbus-broker-launch[767]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 26 12:47:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 12:47:35 compute-0 dbus-broker-launch[766]: Noticed file-system modification, trigger reload.
Nov 26 12:47:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:47:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 12:47:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:47:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:47:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:47:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:47:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:47:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:47:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:47:36 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:47:36 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:36 compute-0 groupadd[183399]: group added to /etc/group: name=clevis, GID=990
Nov 26 12:47:36 compute-0 groupadd[183399]: group added to /etc/gshadow: name=clevis
Nov 26 12:47:36 compute-0 groupadd[183399]: new group: name=clevis, GID=990
Nov 26 12:47:36 compute-0 useradd[183406]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Nov 26 12:47:36 compute-0 usermod[183416]: add 'clevis' to group 'tss'
Nov 26 12:47:36 compute-0 usermod[183416]: add 'clevis' to shadow group 'tss'
Nov 26 12:47:37 compute-0 ceph-mon[74966]: pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:38 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:38 compute-0 polkitd[43512]: Reloading rules
Nov 26 12:47:38 compute-0 polkitd[43512]: Collecting garbage unconditionally...
Nov 26 12:47:38 compute-0 polkitd[43512]: Loading rules from directory /etc/polkit-1/rules.d
Nov 26 12:47:38 compute-0 polkitd[43512]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 26 12:47:38 compute-0 polkitd[43512]: Finished loading, compiling and executing 3 rules
Nov 26 12:47:38 compute-0 polkitd[43512]: Reloading rules
Nov 26 12:47:38 compute-0 polkitd[43512]: Collecting garbage unconditionally...
Nov 26 12:47:38 compute-0 polkitd[43512]: Loading rules from directory /etc/polkit-1/rules.d
Nov 26 12:47:38 compute-0 polkitd[43512]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 26 12:47:38 compute-0 polkitd[43512]: Finished loading, compiling and executing 3 rules
Nov 26 12:47:39 compute-0 groupadd[183603]: group added to /etc/group: name=ceph, GID=167
Nov 26 12:47:39 compute-0 groupadd[183603]: group added to /etc/gshadow: name=ceph
Nov 26 12:47:39 compute-0 groupadd[183603]: new group: name=ceph, GID=167
Nov 26 12:47:39 compute-0 useradd[183609]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Nov 26 12:47:39 compute-0 ceph-mon[74966]: pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:40 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:47:41 compute-0 ceph-mon[74966]: pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:41 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Nov 26 12:47:41 compute-0 sshd[963]: Received signal 15; terminating.
Nov 26 12:47:41 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Nov 26 12:47:41 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Nov 26 12:47:41 compute-0 systemd[1]: sshd.service: Consumed 1.402s CPU time, read 32.0K from disk, written 0B to disk.
Nov 26 12:47:41 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Nov 26 12:47:41 compute-0 systemd[1]: Stopping sshd-keygen.target...
Nov 26 12:47:41 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 26 12:47:41 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 26 12:47:41 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 26 12:47:41 compute-0 systemd[1]: Reached target sshd-keygen.target.
Nov 26 12:47:41 compute-0 systemd[1]: Starting OpenSSH server daemon...
Nov 26 12:47:41 compute-0 sshd[184234]: Server listening on 0.0.0.0 port 22.
Nov 26 12:47:41 compute-0 sshd[184234]: Server listening on :: port 22.
Nov 26 12:47:41 compute-0 systemd[1]: Started OpenSSH server daemon.
Nov 26 12:47:42 compute-0 podman[184268]: 2025-11-26 12:47:42.011846491 +0000 UTC m=+0.090976261 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 26 12:47:42 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.305145) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161262305227, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 832, "num_deletes": 251, "total_data_size": 1147783, "memory_usage": 1165120, "flush_reason": "Manual Compaction"}
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161262311374, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1137594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8846, "largest_seqno": 9677, "table_properties": {"data_size": 1133396, "index_size": 1914, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8803, "raw_average_key_size": 18, "raw_value_size": 1125029, "raw_average_value_size": 2378, "num_data_blocks": 89, "num_entries": 473, "num_filter_entries": 473, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764161186, "oldest_key_time": 1764161186, "file_creation_time": 1764161262, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "360f285c-8dc8-4f98-b8a2-efdebada3f64", "db_session_id": "S468WH7D6IL73VDKE1V5", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 6269 microseconds, and 5015 cpu microseconds.
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.311423) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1137594 bytes OK
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.311445) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.312009) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.312021) EVENT_LOG_v1 {"time_micros": 1764161262312018, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.312042) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1143676, prev total WAL file size 1143676, number of live WAL files 2.
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.312493) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1110KB)], [23(6697KB)]
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161262312735, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7996092, "oldest_snapshot_seqno": -1}
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3281 keys, 6224719 bytes, temperature: kUnknown
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161262329342, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6224719, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6200728, "index_size": 14666, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 79556, "raw_average_key_size": 24, "raw_value_size": 6139326, "raw_average_value_size": 1871, "num_data_blocks": 641, "num_entries": 3281, "num_filter_entries": 3281, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160613, "oldest_key_time": 0, "file_creation_time": 1764161262, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "360f285c-8dc8-4f98-b8a2-efdebada3f64", "db_session_id": "S468WH7D6IL73VDKE1V5", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.329639) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6224719 bytes
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.330364) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 481.5 rd, 374.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 6.5 +0.0 blob) out(5.9 +0.0 blob), read-write-amplify(12.5) write-amplify(5.5) OK, records in: 3795, records dropped: 514 output_compression: NoCompression
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.330386) EVENT_LOG_v1 {"time_micros": 1764161262330374, "job": 8, "event": "compaction_finished", "compaction_time_micros": 16606, "compaction_time_cpu_micros": 13516, "output_level": 6, "num_output_files": 1, "total_output_size": 6224719, "num_input_records": 3795, "num_output_records": 3281, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161262330987, "job": 8, "event": "table_file_deletion", "file_number": 25}
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161262332414, "job": 8, "event": "table_file_deletion", "file_number": 23}
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.312407) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.332547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.332551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.332553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.332554) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:47:42 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:47:42.332556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:47:43 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 12:47:43 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 26 12:47:43 compute-0 systemd[1]: Reloading.
Nov 26 12:47:43 compute-0 ceph-mon[74966]: pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:43 compute-0 systemd-rc-local-generator[184514]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:47:43 compute-0 systemd-sysv-generator[184518]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:47:43 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 12:47:44 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 12:47:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:47:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 12:47:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:47:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:47:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:47:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:47:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:47:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:47:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:47:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:47:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:47:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 12:47:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:47:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:47:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:47:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 12:47:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:47:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 12:47:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:47:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:47:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:47:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 12:47:45 compute-0 ceph-mon[74966]: pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:45 compute-0 sudo[165449]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:47:46 compute-0 sudo[187738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvkunykdlpkmqljodwytqgnafiladwon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161265.5228124-336-252673556346719/AnsiballZ_systemd.py'
Nov 26 12:47:46 compute-0 sudo[187738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:47:46 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:46 compute-0 python3.9[187766]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 12:47:47 compute-0 ceph-mon[74966]: pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:47 compute-0 systemd[1]: Reloading.
Nov 26 12:47:47 compute-0 systemd-sysv-generator[189257]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:47:47 compute-0 systemd-rc-local-generator[189250]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:47:47 compute-0 sudo[187738]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:47 compute-0 sudo[189935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmjxobnmoytfjqhwlvhvqvjuunmexxpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161267.7760859-336-261415000122515/AnsiballZ_systemd.py'
Nov 26 12:47:48 compute-0 sudo[189935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:47:48 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:48 compute-0 python3.9[189959]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 12:47:48 compute-0 systemd[1]: Reloading.
Nov 26 12:47:48 compute-0 systemd-rc-local-generator[190402]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:47:48 compute-0 systemd-sysv-generator[190408]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:47:48 compute-0 sudo[189935]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:48 compute-0 sudo[190651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:47:48 compute-0 sudo[190651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:47:48 compute-0 sudo[190651]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:48 compute-0 sudo[190766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:47:48 compute-0 sudo[190766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:47:48 compute-0 sudo[190766]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:48 compute-0 sudo[190883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:47:48 compute-0 sudo[190883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:47:48 compute-0 sudo[190883]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:48 compute-0 sudo[190997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 26 12:47:48 compute-0 sudo[190997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:47:48 compute-0 sudo[191239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozgpukymrlgborjlwdjmoubxsejavawh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161268.7590332-336-70995061357652/AnsiballZ_systemd.py'
Nov 26 12:47:48 compute-0 sudo[191239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:47:49 compute-0 sudo[190997]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:49 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:47:49 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:47:49 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:47:49 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:47:49 compute-0 sudo[191377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:47:49 compute-0 sudo[191377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:47:49 compute-0 sudo[191377]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:49 compute-0 sudo[191464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:47:49 compute-0 python3.9[191259]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 12:47:49 compute-0 sudo[191464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:47:49 compute-0 sudo[191464]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:49 compute-0 systemd[1]: Reloading.
Nov 26 12:47:49 compute-0 ceph-mon[74966]: pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:49 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:47:49 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:47:49 compute-0 systemd-rc-local-generator[191754]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:47:49 compute-0 systemd-sysv-generator[191758]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:47:49 compute-0 sudo[191553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:47:49 compute-0 sudo[191553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:47:49 compute-0 sudo[191553]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:49 compute-0 sudo[191239]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:49 compute-0 sudo[192058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 12:47:49 compute-0 sudo[192058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:47:49 compute-0 sudo[192681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibuykigquivoydbkudvihrjtdflqmame ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161269.7562582-336-243932163875144/AnsiballZ_systemd.py'
Nov 26 12:47:49 compute-0 sudo[192681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:47:50 compute-0 sudo[192058]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:50 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:47:50 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:47:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 12:47:50 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:47:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 12:47:50 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:47:50 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 98f53cb5-6a97-4e90-bdea-079c62a716f3 does not exist
Nov 26 12:47:50 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev ba583fbc-4e18-4522-b490-2b40a4b3b3c9 does not exist
Nov 26 12:47:50 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev e012392a-f40b-4485-8c89-8151280a9083 does not exist
Nov 26 12:47:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 12:47:50 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:47:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 12:47:50 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:47:50 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:47:50 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:47:50 compute-0 sudo[192852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:47:50 compute-0 sudo[192852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:47:50 compute-0 sudo[192852]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:50 compute-0 sudo[192932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:47:50 compute-0 sudo[192932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:47:50 compute-0 sudo[192932]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:50 compute-0 python3.9[192706]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 12:47:50 compute-0 sudo[193022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:47:50 compute-0 sudo[193022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:47:50 compute-0 sudo[193022]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:50 compute-0 systemd[1]: Reloading.
Nov 26 12:47:50 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:47:50 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:47:50 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:47:50 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:47:50 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:47:50 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:47:50 compute-0 systemd-rc-local-generator[193212]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:47:50 compute-0 systemd-sysv-generator[193223]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:47:50 compute-0 sudo[193114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 12:47:50 compute-0 sudo[193114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:47:50 compute-0 sudo[192681]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:50 compute-0 podman[193895]: 2025-11-26 12:47:50.88688434 +0000 UTC m=+0.033092552 container create 0e52e371ebbaa977f6953b8635cc62111b60ee5f7c9de324a8ef9ddc103ed360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_goldberg, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:47:50 compute-0 systemd[1]: Started libpod-conmon-0e52e371ebbaa977f6953b8635cc62111b60ee5f7c9de324a8ef9ddc103ed360.scope.
Nov 26 12:47:50 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:47:50 compute-0 podman[193895]: 2025-11-26 12:47:50.967983944 +0000 UTC m=+0.114192156 container init 0e52e371ebbaa977f6953b8635cc62111b60ee5f7c9de324a8ef9ddc103ed360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_goldberg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:47:50 compute-0 podman[193895]: 2025-11-26 12:47:50.87200299 +0000 UTC m=+0.018211223 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:47:50 compute-0 podman[193895]: 2025-11-26 12:47:50.975625563 +0000 UTC m=+0.121833764 container start 0e52e371ebbaa977f6953b8635cc62111b60ee5f7c9de324a8ef9ddc103ed360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_goldberg, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 12:47:50 compute-0 podman[193895]: 2025-11-26 12:47:50.977895196 +0000 UTC m=+0.124103409 container attach 0e52e371ebbaa977f6953b8635cc62111b60ee5f7c9de324a8ef9ddc103ed360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_goldberg, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:47:50 compute-0 competent_goldberg[194043]: 167 167
Nov 26 12:47:50 compute-0 systemd[1]: libpod-0e52e371ebbaa977f6953b8635cc62111b60ee5f7c9de324a8ef9ddc103ed360.scope: Deactivated successfully.
Nov 26 12:47:50 compute-0 conmon[194043]: conmon 0e52e371ebbaa977f695 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0e52e371ebbaa977f6953b8635cc62111b60ee5f7c9de324a8ef9ddc103ed360.scope/container/memory.events
Nov 26 12:47:50 compute-0 podman[193895]: 2025-11-26 12:47:50.985962195 +0000 UTC m=+0.132170408 container died 0e52e371ebbaa977f6953b8635cc62111b60ee5f7c9de324a8ef9ddc103ed360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 12:47:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f84b50d3563a09edc3b9ffe849ccd27a0f72163b1e6efc30dc28ca36ea3821b-merged.mount: Deactivated successfully.
Nov 26 12:47:51 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:47:51 compute-0 podman[193895]: 2025-11-26 12:47:51.021179371 +0000 UTC m=+0.167387583 container remove 0e52e371ebbaa977f6953b8635cc62111b60ee5f7c9de324a8ef9ddc103ed360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_goldberg, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:47:51 compute-0 sudo[194162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thmrzpcbbxotedtpjalosajhkzmcphgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161270.764817-365-212490689444729/AnsiballZ_systemd.py'
Nov 26 12:47:51 compute-0 systemd[1]: libpod-conmon-0e52e371ebbaa977f6953b8635cc62111b60ee5f7c9de324a8ef9ddc103ed360.scope: Deactivated successfully.
Nov 26 12:47:51 compute-0 sudo[194162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:47:51 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 12:47:51 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 26 12:47:51 compute-0 systemd[1]: man-db-cache-update.service: Consumed 9.454s CPU time.
Nov 26 12:47:51 compute-0 systemd[1]: run-r9038aa58a0774d5692cf4c3984359af4.service: Deactivated successfully.
Nov 26 12:47:51 compute-0 podman[194214]: 2025-11-26 12:47:51.175445026 +0000 UTC m=+0.035005993 container create b25e0b8dbe5d78225db13778dd0f02210842df865a7f91c347bf79b37b2384bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ride, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:47:51 compute-0 systemd[1]: Started libpod-conmon-b25e0b8dbe5d78225db13778dd0f02210842df865a7f91c347bf79b37b2384bc.scope.
Nov 26 12:47:51 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:47:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3758f63698c7e74aeaaa290efbec3617a8e83da642439f6e9674e28d4d0dfe16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:47:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3758f63698c7e74aeaaa290efbec3617a8e83da642439f6e9674e28d4d0dfe16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:47:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3758f63698c7e74aeaaa290efbec3617a8e83da642439f6e9674e28d4d0dfe16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:47:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3758f63698c7e74aeaaa290efbec3617a8e83da642439f6e9674e28d4d0dfe16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:47:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3758f63698c7e74aeaaa290efbec3617a8e83da642439f6e9674e28d4d0dfe16/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:47:51 compute-0 podman[194214]: 2025-11-26 12:47:51.257319044 +0000 UTC m=+0.116880020 container init b25e0b8dbe5d78225db13778dd0f02210842df865a7f91c347bf79b37b2384bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:47:51 compute-0 podman[194214]: 2025-11-26 12:47:51.162168765 +0000 UTC m=+0.021729752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:47:51 compute-0 podman[194214]: 2025-11-26 12:47:51.266603709 +0000 UTC m=+0.126164676 container start b25e0b8dbe5d78225db13778dd0f02210842df865a7f91c347bf79b37b2384bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ride, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 12:47:51 compute-0 podman[194214]: 2025-11-26 12:47:51.268360541 +0000 UTC m=+0.127921528 container attach b25e0b8dbe5d78225db13778dd0f02210842df865a7f91c347bf79b37b2384bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:47:51 compute-0 python3.9[194186]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 12:47:51 compute-0 ceph-mon[74966]: pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:51 compute-0 systemd[1]: Reloading.
Nov 26 12:47:51 compute-0 systemd-rc-local-generator[194260]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:47:51 compute-0 systemd-sysv-generator[194266]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:47:51 compute-0 sudo[194162]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:52 compute-0 sudo[194433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgckdptzmaolmdkuqgvtnimumsavejka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161271.805851-365-266332225519846/AnsiballZ_systemd.py'
Nov 26 12:47:52 compute-0 sudo[194433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:47:52 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:52 compute-0 quizzical_ride[194227]: --> passed data devices: 0 physical, 3 LVM
Nov 26 12:47:52 compute-0 quizzical_ride[194227]: --> relative data size: 1.0
Nov 26 12:47:52 compute-0 quizzical_ride[194227]: --> All data devices are unavailable
Nov 26 12:47:52 compute-0 systemd[1]: libpod-b25e0b8dbe5d78225db13778dd0f02210842df865a7f91c347bf79b37b2384bc.scope: Deactivated successfully.
Nov 26 12:47:52 compute-0 podman[194446]: 2025-11-26 12:47:52.229641545 +0000 UTC m=+0.030133426 container died b25e0b8dbe5d78225db13778dd0f02210842df865a7f91c347bf79b37b2384bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:47:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-3758f63698c7e74aeaaa290efbec3617a8e83da642439f6e9674e28d4d0dfe16-merged.mount: Deactivated successfully.
Nov 26 12:47:52 compute-0 podman[194446]: 2025-11-26 12:47:52.265478153 +0000 UTC m=+0.065970034 container remove b25e0b8dbe5d78225db13778dd0f02210842df865a7f91c347bf79b37b2384bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ride, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:47:52 compute-0 systemd[1]: libpod-conmon-b25e0b8dbe5d78225db13778dd0f02210842df865a7f91c347bf79b37b2384bc.scope: Deactivated successfully.
Nov 26 12:47:52 compute-0 sudo[193114]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:52 compute-0 python3.9[194435]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 12:47:52 compute-0 sudo[194458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:47:52 compute-0 sudo[194458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:47:52 compute-0 sudo[194458]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:52 compute-0 systemd[1]: Reloading.
Nov 26 12:47:52 compute-0 systemd-sysv-generator[194533]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:47:52 compute-0 systemd-rc-local-generator[194529]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:47:52 compute-0 sudo[194486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:47:52 compute-0 sudo[194486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:47:52 compute-0 sudo[194486]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:52 compute-0 sudo[194433]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:52 compute-0 sudo[194546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:47:52 compute-0 sudo[194546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:47:52 compute-0 sudo[194546]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:52 compute-0 sudo[194594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- lvm list --format json
Nov 26 12:47:52 compute-0 sudo[194594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:47:52 compute-0 sudo[194774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbwyxjnzcntpdfnipcubntakqufjgzuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161272.75707-365-58594441990277/AnsiballZ_systemd.py'
Nov 26 12:47:52 compute-0 sudo[194774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:47:53 compute-0 podman[194781]: 2025-11-26 12:47:53.015586165 +0000 UTC m=+0.034481244 container create 6e074b7e5a315fa289432e34a6bb9302b79434661f190bf1b7e2a6081e8ab7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ride, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 12:47:53 compute-0 systemd[1]: Started libpod-conmon-6e074b7e5a315fa289432e34a6bb9302b79434661f190bf1b7e2a6081e8ab7af.scope.
Nov 26 12:47:53 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:47:53 compute-0 podman[194781]: 2025-11-26 12:47:53.093569607 +0000 UTC m=+0.112464705 container init 6e074b7e5a315fa289432e34a6bb9302b79434661f190bf1b7e2a6081e8ab7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 12:47:53 compute-0 podman[194781]: 2025-11-26 12:47:52.999349161 +0000 UTC m=+0.018244260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:47:53 compute-0 podman[194781]: 2025-11-26 12:47:53.100205849 +0000 UTC m=+0.119100917 container start 6e074b7e5a315fa289432e34a6bb9302b79434661f190bf1b7e2a6081e8ab7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:47:53 compute-0 podman[194781]: 2025-11-26 12:47:53.10230654 +0000 UTC m=+0.121201639 container attach 6e074b7e5a315fa289432e34a6bb9302b79434661f190bf1b7e2a6081e8ab7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:47:53 compute-0 eager_ride[194795]: 167 167
Nov 26 12:47:53 compute-0 systemd[1]: libpod-6e074b7e5a315fa289432e34a6bb9302b79434661f190bf1b7e2a6081e8ab7af.scope: Deactivated successfully.
Nov 26 12:47:53 compute-0 podman[194781]: 2025-11-26 12:47:53.105491405 +0000 UTC m=+0.124386483 container died 6e074b7e5a315fa289432e34a6bb9302b79434661f190bf1b7e2a6081e8ab7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 12:47:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-88e913ede45ccb30d0cae1267fd8d199c6ff7a0fcd5b019723e71dd27b11c19c-merged.mount: Deactivated successfully.
Nov 26 12:47:53 compute-0 podman[194781]: 2025-11-26 12:47:53.130280753 +0000 UTC m=+0.149175832 container remove 6e074b7e5a315fa289432e34a6bb9302b79434661f190bf1b7e2a6081e8ab7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ride, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 12:47:53 compute-0 systemd[1]: libpod-conmon-6e074b7e5a315fa289432e34a6bb9302b79434661f190bf1b7e2a6081e8ab7af.scope: Deactivated successfully.
Nov 26 12:47:53 compute-0 python3.9[194779]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 12:47:53 compute-0 podman[194817]: 2025-11-26 12:47:53.269490342 +0000 UTC m=+0.035146948 container create ba489a161fd716ed594fe68f1655483737b297dea734e383338ecf625faf5682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:47:53 compute-0 systemd[1]: Started libpod-conmon-ba489a161fd716ed594fe68f1655483737b297dea734e383338ecf625faf5682.scope.
Nov 26 12:47:53 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:47:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4981feae16d86dedf11a6a69e50f4ababa7c65e64e69bef4a1183d6c70654b57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:47:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4981feae16d86dedf11a6a69e50f4ababa7c65e64e69bef4a1183d6c70654b57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:47:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4981feae16d86dedf11a6a69e50f4ababa7c65e64e69bef4a1183d6c70654b57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:47:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4981feae16d86dedf11a6a69e50f4ababa7c65e64e69bef4a1183d6c70654b57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:47:53 compute-0 ceph-mon[74966]: pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:53 compute-0 podman[194817]: 2025-11-26 12:47:53.345105729 +0000 UTC m=+0.110762335 container init ba489a161fd716ed594fe68f1655483737b297dea734e383338ecf625faf5682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 12:47:53 compute-0 podman[194817]: 2025-11-26 12:47:53.353260324 +0000 UTC m=+0.118916940 container start ba489a161fd716ed594fe68f1655483737b297dea734e383338ecf625faf5682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_darwin, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 12:47:53 compute-0 podman[194817]: 2025-11-26 12:47:53.354472219 +0000 UTC m=+0.120128835 container attach ba489a161fd716ed594fe68f1655483737b297dea734e383338ecf625faf5682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_darwin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:47:53 compute-0 systemd[1]: Reloading.
Nov 26 12:47:53 compute-0 podman[194817]: 2025-11-26 12:47:53.254766621 +0000 UTC m=+0.020423247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:47:53 compute-0 systemd-rc-local-generator[194855]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:47:53 compute-0 systemd-sysv-generator[194861]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:47:53 compute-0 sudo[194774]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:53 compute-0 sudo[195024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpebuuyjyucfywelmbwnnizexnpnzqhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161273.7535126-365-264876686089167/AnsiballZ_systemd.py'
Nov 26 12:47:53 compute-0 sudo[195024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]: {
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:     "0": [
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:         {
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "devices": [
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "/dev/loop3"
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             ],
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "lv_name": "ceph_lv0",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "lv_size": "21470642176",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "name": "ceph_lv0",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "tags": {
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.cluster_name": "ceph",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.crush_device_class": "",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.encrypted": "0",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.osd_id": "0",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.type": "block",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.vdo": "0"
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             },
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "type": "block",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "vg_name": "ceph_vg0"
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:         }
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:     ],
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:     "1": [
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:         {
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "devices": [
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "/dev/loop4"
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             ],
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "lv_name": "ceph_lv1",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "lv_size": "21470642176",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "name": "ceph_lv1",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "tags": {
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.cluster_name": "ceph",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.crush_device_class": "",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.encrypted": "0",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.osd_id": "1",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.type": "block",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.vdo": "0"
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             },
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "type": "block",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "vg_name": "ceph_vg1"
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:         }
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:     ],
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:     "2": [
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:         {
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "devices": [
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "/dev/loop5"
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             ],
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "lv_name": "ceph_lv2",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "lv_size": "21470642176",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "name": "ceph_lv2",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "tags": {
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.cluster_name": "ceph",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.crush_device_class": "",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.encrypted": "0",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.osd_id": "2",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.type": "block",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:                 "ceph.vdo": "0"
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             },
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "type": "block",
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:             "vg_name": "ceph_vg2"
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:         }
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]:     ]
Nov 26 12:47:54 compute-0 wonderful_darwin[194832]: }
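[editorial annotation] The JSON emitted by the wonderful_darwin container above has the shape of ceph-volume "lvm list --format json" output: top-level keys are OSD ids, each mapping to a list of logical-volume records whose "tags" dict carries the ceph.* metadata. Below is a minimal Python sketch for reducing such a payload to a per-OSD summary; the function name summarize_lvm_list and the trimmed example payload are illustrative assumptions, not part of the captured log.

    import json

    # Minimal sketch, assuming the structure seen above: top-level keys are
    # OSD ids, each mapping to a list of LV records with "lv_path", "devices"
    # and a "tags" dict carrying ceph.osd_fsid / ceph.cluster_fsid.
    def summarize_lvm_list(payload: str) -> dict:
        data = json.loads(payload)
        summary = {}
        for osd_id, lvs in data.items():
            for lv in lvs:
                summary[int(osd_id)] = {
                    "lv_path": lv["lv_path"],
                    "devices": lv["devices"],
                    "osd_fsid": lv["tags"]["ceph.osd_fsid"],
                    "cluster_fsid": lv["tags"]["ceph.cluster_fsid"],
                }
        return summary

    # Trimmed example built from OSD 0 in the output above:
    example = """{"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
                         "devices": ["/dev/loop3"],
                         "tags": {"ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
                                  "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e"}}]}"""
    print(summarize_lvm_list(example))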
Nov 26 12:47:54 compute-0 systemd[1]: libpod-ba489a161fd716ed594fe68f1655483737b297dea734e383338ecf625faf5682.scope: Deactivated successfully.
Nov 26 12:47:54 compute-0 podman[194817]: 2025-11-26 12:47:54.059097897 +0000 UTC m=+0.824754504 container died ba489a161fd716ed594fe68f1655483737b297dea734e383338ecf625faf5682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_darwin, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:47:54 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-4981feae16d86dedf11a6a69e50f4ababa7c65e64e69bef4a1183d6c70654b57-merged.mount: Deactivated successfully.
Nov 26 12:47:54 compute-0 podman[194817]: 2025-11-26 12:47:54.103419253 +0000 UTC m=+0.869075859 container remove ba489a161fd716ed594fe68f1655483737b297dea734e383338ecf625faf5682 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_darwin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 26 12:47:54 compute-0 systemd[1]: libpod-conmon-ba489a161fd716ed594fe68f1655483737b297dea734e383338ecf625faf5682.scope: Deactivated successfully.
Nov 26 12:47:54 compute-0 sudo[194594]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:54 compute-0 sudo[195039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:47:54 compute-0 sudo[195039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:47:54 compute-0 sudo[195039]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:54 compute-0 sudo[195064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:47:54 compute-0 sudo[195064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:47:54 compute-0 sudo[195064]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:54 compute-0 python3.9[195028]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 12:47:54 compute-0 sudo[195089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:47:54 compute-0 sudo[195089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:47:54 compute-0 sudo[195089]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:54 compute-0 sudo[195117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- raw list --format json
Nov 26 12:47:54 compute-0 sudo[195117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:47:54 compute-0 sudo[195024]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:54 compute-0 podman[195294]: 2025-11-26 12:47:54.674547923 +0000 UTC m=+0.045243464 container create 363cbd8c6f593d7ac4a44f9b431a23d547e87b9357e5ce71bb4c112464c6efd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:47:54 compute-0 sudo[195331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pthjsikozspmadzwhltnvrufkyloqgym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161274.4622424-365-245390416025332/AnsiballZ_systemd.py'
Nov 26 12:47:54 compute-0 sudo[195331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:47:54 compute-0 systemd[1]: Started libpod-conmon-363cbd8c6f593d7ac4a44f9b431a23d547e87b9357e5ce71bb4c112464c6efd9.scope.
Nov 26 12:47:54 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:47:54 compute-0 podman[195294]: 2025-11-26 12:47:54.73273528 +0000 UTC m=+0.103430841 container init 363cbd8c6f593d7ac4a44f9b431a23d547e87b9357e5ce71bb4c112464c6efd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_tharp, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:47:54 compute-0 podman[195294]: 2025-11-26 12:47:54.741848583 +0000 UTC m=+0.112544134 container start 363cbd8c6f593d7ac4a44f9b431a23d547e87b9357e5ce71bb4c112464c6efd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_tharp, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:47:54 compute-0 podman[195294]: 2025-11-26 12:47:54.743627868 +0000 UTC m=+0.114323408 container attach 363cbd8c6f593d7ac4a44f9b431a23d547e87b9357e5ce71bb4c112464c6efd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_tharp, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:47:54 compute-0 infallible_tharp[195337]: 167 167
Nov 26 12:47:54 compute-0 systemd[1]: libpod-363cbd8c6f593d7ac4a44f9b431a23d547e87b9357e5ce71bb4c112464c6efd9.scope: Deactivated successfully.
Nov 26 12:47:54 compute-0 podman[195294]: 2025-11-26 12:47:54.746364547 +0000 UTC m=+0.117060088 container died 363cbd8c6f593d7ac4a44f9b431a23d547e87b9357e5ce71bb4c112464c6efd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_tharp, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:47:54 compute-0 podman[195294]: 2025-11-26 12:47:54.657082904 +0000 UTC m=+0.027778445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:47:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e44fc9266bc7ec6163792b238907d1ce38793ae1e2b1161bb7f7f63fe324158-merged.mount: Deactivated successfully.
Nov 26 12:47:54 compute-0 podman[195294]: 2025-11-26 12:47:54.768703596 +0000 UTC m=+0.139399137 container remove 363cbd8c6f593d7ac4a44f9b431a23d547e87b9357e5ce71bb4c112464c6efd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_tharp, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:47:54 compute-0 systemd[1]: libpod-conmon-363cbd8c6f593d7ac4a44f9b431a23d547e87b9357e5ce71bb4c112464c6efd9.scope: Deactivated successfully.
Nov 26 12:47:54 compute-0 podman[195359]: 2025-11-26 12:47:54.919661727 +0000 UTC m=+0.037855244 container create 3ab7d33452a306818f2974ce3f4871b6a3f6dacc07c6e9d61015faba5775bfd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 26 12:47:54 compute-0 python3.9[195336]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 12:47:54 compute-0 systemd[1]: Started libpod-conmon-3ab7d33452a306818f2974ce3f4871b6a3f6dacc07c6e9d61015faba5775bfd1.scope.
Nov 26 12:47:54 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:47:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e06cc61bf2c272855563ff0de0e225132f83d6901d97b2f2fd11c5fc558f0c16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:47:55 compute-0 podman[195359]: 2025-11-26 12:47:54.904922416 +0000 UTC m=+0.023115954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:47:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e06cc61bf2c272855563ff0de0e225132f83d6901d97b2f2fd11c5fc558f0c16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:47:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e06cc61bf2c272855563ff0de0e225132f83d6901d97b2f2fd11c5fc558f0c16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:47:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e06cc61bf2c272855563ff0de0e225132f83d6901d97b2f2fd11c5fc558f0c16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:47:55 compute-0 podman[195359]: 2025-11-26 12:47:55.018353652 +0000 UTC m=+0.136547170 container init 3ab7d33452a306818f2974ce3f4871b6a3f6dacc07c6e9d61015faba5775bfd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:47:55 compute-0 podman[195359]: 2025-11-26 12:47:55.025335607 +0000 UTC m=+0.143529124 container start 3ab7d33452a306818f2974ce3f4871b6a3f6dacc07c6e9d61015faba5775bfd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:47:55 compute-0 podman[195359]: 2025-11-26 12:47:55.029804302 +0000 UTC m=+0.147997820 container attach 3ab7d33452a306818f2974ce3f4871b6a3f6dacc07c6e9d61015faba5775bfd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 12:47:55 compute-0 systemd[1]: Reloading.
Nov 26 12:47:55 compute-0 systemd-rc-local-generator[195399]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:47:55 compute-0 systemd-sysv-generator[195403]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:47:55 compute-0 ceph-mon[74966]: pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:55 compute-0 sudo[195331]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:55 compute-0 sudo[195566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-typkobkhndkvmncdiwpdsqghpxrcddnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161275.4727628-401-220595442499513/AnsiballZ_systemd.py'
Nov 26 12:47:55 compute-0 sudo[195566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:47:55 compute-0 intelligent_maxwell[195373]: {
Nov 26 12:47:55 compute-0 intelligent_maxwell[195373]:     "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 12:47:55 compute-0 intelligent_maxwell[195373]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:47:55 compute-0 intelligent_maxwell[195373]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 12:47:55 compute-0 intelligent_maxwell[195373]:         "osd_id": 1,
Nov 26 12:47:55 compute-0 intelligent_maxwell[195373]:         "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:47:55 compute-0 intelligent_maxwell[195373]:         "type": "bluestore"
Nov 26 12:47:55 compute-0 intelligent_maxwell[195373]:     },
Nov 26 12:47:55 compute-0 intelligent_maxwell[195373]:     "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 12:47:55 compute-0 intelligent_maxwell[195373]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:47:55 compute-0 intelligent_maxwell[195373]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 12:47:55 compute-0 intelligent_maxwell[195373]:         "osd_id": 2,
Nov 26 12:47:55 compute-0 intelligent_maxwell[195373]:         "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:47:55 compute-0 intelligent_maxwell[195373]:         "type": "bluestore"
Nov 26 12:47:55 compute-0 intelligent_maxwell[195373]:     },
Nov 26 12:47:55 compute-0 intelligent_maxwell[195373]:     "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 12:47:55 compute-0 intelligent_maxwell[195373]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:47:55 compute-0 intelligent_maxwell[195373]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 12:47:55 compute-0 intelligent_maxwell[195373]:         "osd_id": 0,
Nov 26 12:47:55 compute-0 intelligent_maxwell[195373]:         "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:47:55 compute-0 intelligent_maxwell[195373]:         "type": "bluestore"
Nov 26 12:47:55 compute-0 intelligent_maxwell[195373]:     }
Nov 26 12:47:55 compute-0 intelligent_maxwell[195373]: }
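[editorial annotation] The intelligent_maxwell output above corresponds to the "ceph-volume ... raw list --format json" call logged for sudo[195117] at 12:47:54; unlike the lvm listing it is keyed by OSD fsid rather than OSD id. A short sketch for indexing it by osd_id follows, assuming exactly that shape; index_raw_list and the one-entry example payload are illustrative, not taken from the log verbatim.

    import json

    # Sketch, assuming the raw-list shape seen above: top-level keys are OSD
    # fsids, each value carrying ceph_fsid, device, osd_id and type.
    def index_raw_list(payload: str) -> dict:
        data = json.loads(payload)
        fsids = {entry["ceph_fsid"] for entry in data.values()}
        if len(fsids) != 1:
            raise ValueError(f"entries span multiple cluster fsids: {fsids}")
        return {entry["osd_id"]: entry["device"] for entry in data.values()}

    # One-entry example modeled on OSD 0 above:
    example = """{"ef2b480d-9484-4a2f-b46e-f0af80cc4943":
                     {"ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
                      "device": "/dev/mapper/ceph_vg0-ceph_lv0",
                      "osd_id": 0, "type": "bluestore"}}"""
    print(index_raw_list(example))   # {0: '/dev/mapper/ceph_vg0-ceph_lv0'}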
Nov 26 12:47:55 compute-0 systemd[1]: libpod-3ab7d33452a306818f2974ce3f4871b6a3f6dacc07c6e9d61015faba5775bfd1.scope: Deactivated successfully.
Nov 26 12:47:55 compute-0 python3.9[195571]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 12:47:55 compute-0 podman[195595]: 2025-11-26 12:47:55.945516829 +0000 UTC m=+0.026849414 container died 3ab7d33452a306818f2974ce3f4871b6a3f6dacc07c6e9d61015faba5775bfd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:47:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-e06cc61bf2c272855563ff0de0e225132f83d6901d97b2f2fd11c5fc558f0c16-merged.mount: Deactivated successfully.
Nov 26 12:47:55 compute-0 podman[195595]: 2025-11-26 12:47:55.989031683 +0000 UTC m=+0.070364248 container remove 3ab7d33452a306818f2974ce3f4871b6a3f6dacc07c6e9d61015faba5775bfd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 12:47:55 compute-0 systemd[1]: Reloading.
Nov 26 12:47:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:47:56 compute-0 sudo[195117]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:47:56 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:47:56 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:47:56 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:47:56 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev cf3880a9-3aff-4caa-a930-29ff21a1e619 does not exist
Nov 26 12:47:56 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev b5694425-2467-49cd-becd-635f578ef9bd does not exist
Nov 26 12:47:56 compute-0 systemd-rc-local-generator[195632]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:47:56 compute-0 systemd-sysv-generator[195635]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:47:56 compute-0 systemd[1]: libpod-conmon-3ab7d33452a306818f2974ce3f4871b6a3f6dacc07c6e9d61015faba5775bfd1.scope: Deactivated successfully.
Nov 26 12:47:56 compute-0 sudo[195642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:47:56 compute-0 sudo[195642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:47:56 compute-0 sudo[195642]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:56 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 26 12:47:56 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 26 12:47:56 compute-0 sudo[195672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 12:47:56 compute-0 sudo[195566]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:56 compute-0 sudo[195672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:47:56 compute-0 sudo[195672]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:56 compute-0 sudo[195846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psiwnlmasblkeauybzjubywxhsieahgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161276.4684167-409-168793661041380/AnsiballZ_systemd.py'
Nov 26 12:47:56 compute-0 sudo[195846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:47:56 compute-0 python3.9[195848]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 12:47:57 compute-0 sudo[195846]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:47:57 compute-0 ceph-mon[74966]: pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:47:57 compute-0 sudo[196001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgknbjrmhysgstgzfolfvmjdlhfjewiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161277.1162703-409-36815965458188/AnsiballZ_systemd.py'
Nov 26 12:47:57 compute-0 sudo[196001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:47:57 compute-0 python3.9[196003]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 12:47:58 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:58 compute-0 sudo[196001]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:58 compute-0 sudo[196156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqavayvnlhzrukhojtdwvudgxdukpxyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161278.7530427-409-246060587319597/AnsiballZ_systemd.py'
Nov 26 12:47:58 compute-0 sudo[196156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:47:59 compute-0 ceph-mon[74966]: pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:47:59 compute-0 python3.9[196158]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 12:47:59 compute-0 sudo[196156]: pam_unix(sudo:session): session closed for user root
Nov 26 12:47:59 compute-0 sudo[196311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxevwiqndtosdqdjmnorgeziqmwwqguf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161279.403185-409-70377524975738/AnsiballZ_systemd.py'
Nov 26 12:47:59 compute-0 sudo[196311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:47:59 compute-0 python3.9[196313]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 12:47:59 compute-0 sudo[196311]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:00 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:00 compute-0 sudo[196466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbtsltzltzqtqpwyyzozvasijapmhkcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161280.0105648-409-215526941046896/AnsiballZ_systemd.py'
Nov 26 12:48:00 compute-0 sudo[196466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:00 compute-0 python3.9[196468]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 12:48:00 compute-0 sudo[196466]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:00 compute-0 sudo[196621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vagopfopxtbwqtfbkkzepgclbhaqmcss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161280.6434677-409-6179658964258/AnsiballZ_systemd.py'
Nov 26 12:48:00 compute-0 sudo[196621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:01 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:48:01 compute-0 ceph-mon[74966]: pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:01 compute-0 python3.9[196623]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 12:48:01 compute-0 sudo[196621]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:01 compute-0 sudo[196776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haexgphpfbmgkgnsrtbponrsnbvfizxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161281.250579-409-102411468147133/AnsiballZ_systemd.py'
Nov 26 12:48:01 compute-0 sudo[196776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:01 compute-0 python3.9[196778]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 12:48:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:48:01.722 159053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 12:48:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:48:01.723 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 12:48:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:48:01.723 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 12:48:01 compute-0 sudo[196776]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:02 compute-0 sudo[196940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkbrfmjbkelbjdqfihpodhtfzzefymku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161281.8377604-409-175318030675969/AnsiballZ_systemd.py'
Nov 26 12:48:02 compute-0 sudo[196940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:02 compute-0 podman[196905]: 2025-11-26 12:48:02.054974894 +0000 UTC m=+0.048004119 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
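[editorial annotation] The health_status record above embeds the container configuration in its config_data field as a Python-style dict literal (single quotes, bare True), not JSON. If that field needs to be inspected programmatically, ast.literal_eval is the safe parser for this form; the excerpt below is trimmed to the healthcheck-related keys and is illustrative only.

    import ast

    # Trimmed excerpt of the config_data field from the ovn_metadata_agent
    # health_status entry above (Python literal syntax, so not json.loads).
    config_data_excerpt = (
        "{'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', "
        "'test': '/openstack/healthcheck'}, 'restart': 'always', 'privileged': True}"
    )

    config = ast.literal_eval(config_data_excerpt)   # safe: no code execution
    print(config["healthcheck"]["test"])             # /openstack/healthcheck
    print(config["privileged"])                      # True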
Nov 26 12:48:02 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:02 compute-0 python3.9[196948]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 12:48:02 compute-0 sudo[196940]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:02 compute-0 sudo[197103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htyyufdjkiorrzvwnhbwoakwntbdlzoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161282.458332-409-272631507783154/AnsiballZ_systemd.py'
Nov 26 12:48:02 compute-0 sudo[197103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:02 compute-0 python3.9[197105]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 12:48:02 compute-0 sudo[197103]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:03 compute-0 ceph-mon[74966]: pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:03 compute-0 sudo[197258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtzyzxmxwatakfujlhdstvgkkcjyijfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161283.08623-409-206063541999812/AnsiballZ_systemd.py'
Nov 26 12:48:03 compute-0 sudo[197258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:03 compute-0 python3.9[197260]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 12:48:03 compute-0 sudo[197258]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:03 compute-0 sudo[197413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlxsupnjyetlewbjctllghgsrfdsztea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161283.714362-409-3694933094103/AnsiballZ_systemd.py'
Nov 26 12:48:03 compute-0 sudo[197413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:04 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:04 compute-0 python3.9[197415]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 12:48:04 compute-0 sudo[197413]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:04 compute-0 sudo[197568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsnpepdfqudtbjtkqwvadkcdvjswsyrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161284.3590894-409-62479105966937/AnsiballZ_systemd.py'
Nov 26 12:48:04 compute-0 sudo[197568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:04 compute-0 python3.9[197570]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 12:48:04 compute-0 sudo[197568]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:05 compute-0 ceph-mon[74966]: pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:05 compute-0 sudo[197723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyhtuwhhqlexsugsjxhinnibnbnxgowx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161284.9897313-409-200155750273103/AnsiballZ_systemd.py'
Nov 26 12:48:05 compute-0 sudo[197723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:05 compute-0 python3.9[197725]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 12:48:05 compute-0 sudo[197723]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:05 compute-0 sudo[197878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oymaefnoodkwatuuwfiumyfhyklwzwzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161285.6171892-409-213189452345580/AnsiballZ_systemd.py'
Nov 26 12:48:05 compute-0 sudo[197878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:48:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:48:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:48:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:48:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:48:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:48:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:48:06 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:06 compute-0 python3.9[197880]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 12:48:06 compute-0 sudo[197878]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:06 compute-0 sudo[198033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thhaegyfutrkpxuliplszsoundiwdgrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161286.429187-511-130432199891657/AnsiballZ_file.py'
Nov 26 12:48:06 compute-0 sudo[198033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:06 compute-0 python3.9[198035]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:48:06 compute-0 sudo[198033]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:07 compute-0 sudo[198185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsndlmrigpwyisxvlcmiwvsueclbtfer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161286.9035285-511-192885770997888/AnsiballZ_file.py'
Nov 26 12:48:07 compute-0 sudo[198185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:07 compute-0 ceph-mon[74966]: pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:07 compute-0 python3.9[198187]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:48:07 compute-0 sudo[198185]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:07 compute-0 sudo[198337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trjtgbdmuyrkuxcaiwfctvqeyftoosov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161287.354258-511-252861523185671/AnsiballZ_file.py'
Nov 26 12:48:07 compute-0 sudo[198337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:07 compute-0 python3.9[198339]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:48:07 compute-0 sudo[198337]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:07 compute-0 sudo[198489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkqpnylerkzdxewfpxwkjjdambcnuhyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161287.8060515-511-278804054429891/AnsiballZ_file.py'
Nov 26 12:48:07 compute-0 sudo[198489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:08 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:08 compute-0 python3.9[198491]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:48:08 compute-0 sudo[198489]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:08 compute-0 sudo[198641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olphsoguwfqiqixbhjadpnzkqeuimaox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161288.237167-511-47479814394898/AnsiballZ_file.py'
Nov 26 12:48:08 compute-0 sudo[198641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:08 compute-0 python3.9[198643]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:48:08 compute-0 sudo[198641]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:08 compute-0 sudo[198793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlgnlmegabaxjmqnrvwdbvueeultqsor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161288.6583972-511-59529269207885/AnsiballZ_file.py'
Nov 26 12:48:08 compute-0 sudo[198793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:08 compute-0 python3.9[198795]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:48:09 compute-0 sudo[198793]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:09 compute-0 ceph-mon[74966]: pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:09 compute-0 sudo[198945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfqhcximznalpzktawlmsmchuhszvavd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161289.1314797-554-42520484126253/AnsiballZ_stat.py'
Nov 26 12:48:09 compute-0 sudo[198945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:09 compute-0 python3.9[198947]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:09 compute-0 sudo[198945]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:09 compute-0 sudo[199070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjiqnrmjxseyxpvqybjvvsyjzediebuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161289.1314797-554-42520484126253/AnsiballZ_copy.py'
Nov 26 12:48:09 compute-0 sudo[199070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:10 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:10 compute-0 python3.9[199072]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764161289.1314797-554-42520484126253/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:10 compute-0 sudo[199070]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:10 compute-0 sudo[199222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooismobjgbsmeicuzwtvgncrbujpxekf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161290.2104938-554-219758987890700/AnsiballZ_stat.py'
Nov 26 12:48:10 compute-0 sudo[199222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:10 compute-0 auditd[670]: Audit daemon rotating log files
Nov 26 12:48:10 compute-0 python3.9[199224]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:10 compute-0 sudo[199222]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:10 compute-0 sudo[199347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljrbutcyyaczdlooeibjfqnslzxvfbai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161290.2104938-554-219758987890700/AnsiballZ_copy.py'
Nov 26 12:48:10 compute-0 sudo[199347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:10 compute-0 python3.9[199349]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764161290.2104938-554-219758987890700/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:10 compute-0 sudo[199347]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:11 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:48:11 compute-0 ceph-mon[74966]: pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:11 compute-0 sudo[199499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nextnyessxgxlbpaklvshvilsrytrnhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161291.0333848-554-268677477425835/AnsiballZ_stat.py'
Nov 26 12:48:11 compute-0 sudo[199499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:11 compute-0 python3.9[199501]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:11 compute-0 sudo[199499]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:11 compute-0 sudo[199624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmktedaepwxyarovcjzcrdvzmcwuuxqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161291.0333848-554-268677477425835/AnsiballZ_copy.py'
Nov 26 12:48:11 compute-0 sudo[199624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:11 compute-0 python3.9[199626]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764161291.0333848-554-268677477425835/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:11 compute-0 sudo[199624]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:12 compute-0 sudo[199776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcdaqshpbdhtgjpwncwlgtjpygwhxwzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161291.8748186-554-270317394101025/AnsiballZ_stat.py'
Nov 26 12:48:12 compute-0 sudo[199776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:12 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:12 compute-0 podman[199778]: 2025-11-26 12:48:12.137252043 +0000 UTC m=+0.068872164 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 26 12:48:12 compute-0 python3.9[199779]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:12 compute-0 sudo[199776]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:12 compute-0 sudo[199924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgpwdqjakbziyqabsonlgyfkgytfivfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161291.8748186-554-270317394101025/AnsiballZ_copy.py'
Nov 26 12:48:12 compute-0 sudo[199924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:12 compute-0 python3.9[199926]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764161291.8748186-554-270317394101025/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:12 compute-0 sudo[199924]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:12 compute-0 sudo[200076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqyyjjwojdqyilqdbnsfiwhltxqmxxom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161292.726138-554-1643060450308/AnsiballZ_stat.py'
Nov 26 12:48:12 compute-0 sudo[200076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:13 compute-0 python3.9[200078]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:13 compute-0 ceph-mon[74966]: pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:13 compute-0 sudo[200076]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:13 compute-0 sudo[200201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrzttbftgpazuhognvzsyxjsndaxhfnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161292.726138-554-1643060450308/AnsiballZ_copy.py'
Nov 26 12:48:13 compute-0 sudo[200201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:13 compute-0 python3.9[200203]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764161292.726138-554-1643060450308/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:13 compute-0 sudo[200201]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:13 compute-0 sudo[200353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlwvxhnlxzeehvvaosokuzrgtscvaehe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161293.582661-554-146285844496140/AnsiballZ_stat.py'
Nov 26 12:48:13 compute-0 sudo[200353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:13 compute-0 python3.9[200355]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:13 compute-0 sudo[200353]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:14 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:14 compute-0 sudo[200478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyrodgwuxhhovdnmnzwavosgqdznfvds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161293.582661-554-146285844496140/AnsiballZ_copy.py'
Nov 26 12:48:14 compute-0 sudo[200478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:14 compute-0 python3.9[200480]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764161293.582661-554-146285844496140/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:14 compute-0 sudo[200478]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:14 compute-0 sudo[200630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxkfztbselwjyquqlzzxlcgtlojtbdna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161294.4155183-554-216187874434389/AnsiballZ_stat.py'
Nov 26 12:48:14 compute-0 sudo[200630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:14 compute-0 python3.9[200632]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:14 compute-0 sudo[200630]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:14 compute-0 sudo[200753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eszlnojhrsmfkqmldlejyfybwdazsiqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161294.4155183-554-216187874434389/AnsiballZ_copy.py'
Nov 26 12:48:14 compute-0 sudo[200753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:15 compute-0 ceph-mon[74966]: pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:15 compute-0 python3.9[200755]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764161294.4155183-554-216187874434389/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:15 compute-0 sudo[200753]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:15 compute-0 sudo[200905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ratnoatdrjsfqdwzkelurbxicpspqmhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161295.231182-554-74235149052785/AnsiballZ_stat.py'
Nov 26 12:48:15 compute-0 sudo[200905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:15 compute-0 python3.9[200907]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:15 compute-0 sudo[200905]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:15 compute-0 sudo[201030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcusnrfdsconyqlfzbzjubqyabgekblh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161295.231182-554-74235149052785/AnsiballZ_copy.py'
Nov 26 12:48:15 compute-0 sudo[201030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:15 compute-0 python3.9[201032]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764161295.231182-554-74235149052785/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:15 compute-0 sudo[201030]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:16 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:48:16 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:16 compute-0 sudo[201182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdrxsrkpjguvgvqwuaqxrmpginopqzme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161296.099732-667-232035335697445/AnsiballZ_command.py'
Nov 26 12:48:16 compute-0 sudo[201182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:16 compute-0 python3.9[201184]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 26 12:48:16 compute-0 sudo[201182]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:16 compute-0 ceph-osd[88362]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 12:48:16 compute-0 ceph-osd[88362]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5487 writes, 23K keys, 5487 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5487 writes, 835 syncs, 6.57 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5487 writes, 23K keys, 5487 commit groups, 1.0 writes per commit group, ingest: 18.42 MB, 0.03 MB/s
                                           Interval WAL: 5487 writes, 835 syncs, 6.57 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 26 12:48:16 compute-0 sudo[201335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pomvdmzqfovcttmxgjemxtqejeywshtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161296.5940485-676-158518245431897/AnsiballZ_file.py'
Nov 26 12:48:16 compute-0 sudo[201335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:16 compute-0 python3.9[201337]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:16 compute-0 sudo[201335]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:17 compute-0 ceph-mon[74966]: pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:17 compute-0 sudo[201487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymjcllttkmwvyvlddeqkltueagiogjtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161297.0378296-676-148024737783558/AnsiballZ_file.py'
Nov 26 12:48:17 compute-0 sudo[201487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:17 compute-0 python3.9[201489]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:17 compute-0 sudo[201487]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:17 compute-0 sudo[201639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unopzpcsiawesdxnmpmybajguykmkygs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161297.5080185-676-7469820822215/AnsiballZ_file.py'
Nov 26 12:48:17 compute-0 sudo[201639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:17 compute-0 python3.9[201641]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:17 compute-0 sudo[201639]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:18 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:18 compute-0 sudo[201791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtklsgcwyycqkvbevuyhfptqraejxvos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161297.9806352-676-183371047859326/AnsiballZ_file.py'
Nov 26 12:48:18 compute-0 sudo[201791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:18 compute-0 python3.9[201793]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:18 compute-0 sudo[201791]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:18 compute-0 sudo[201943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yufoesmhlnfekxhvvapidcrxdcivolaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161298.4581754-676-114199948939673/AnsiballZ_file.py'
Nov 26 12:48:18 compute-0 sudo[201943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:18 compute-0 python3.9[201945]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:18 compute-0 sudo[201943]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:19 compute-0 ceph-mon[74966]: pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:19 compute-0 sudo[202095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sogsyqbwcnplcxywbgnfdozbrdtwvxdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161298.9185524-676-213755836192009/AnsiballZ_file.py'
Nov 26 12:48:19 compute-0 sudo[202095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:19 compute-0 python3.9[202097]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:19 compute-0 sudo[202095]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:19 compute-0 sudo[202247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qonxahqhslokhrdwmkueckngdyyytnkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161299.379573-676-193143495299581/AnsiballZ_file.py'
Nov 26 12:48:19 compute-0 sudo[202247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:19 compute-0 python3.9[202249]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:19 compute-0 sudo[202247]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:20 compute-0 sudo[202399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqddpohyobicdgianjfjtfukzssdwnko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161299.8214111-676-129046910905963/AnsiballZ_file.py'
Nov 26 12:48:20 compute-0 sudo[202399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:20 compute-0 ceph-osd[89328]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 12:48:20 compute-0 ceph-osd[89328]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 6699 writes, 27K keys, 6699 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6699 writes, 1243 syncs, 5.39 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6699 writes, 27K keys, 6699 commit groups, 1.0 writes per commit group, ingest: 19.36 MB, 0.03 MB/s
                                           Interval WAL: 6699 writes, 1243 syncs, 5.39 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.042       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.042       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.042       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 26 12:48:20 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:20 compute-0 python3.9[202401]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:20 compute-0 sudo[202399]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:20 compute-0 sudo[202551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmostalfmfzlkqbvjexezttkbiuqdiyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161300.280506-676-149009418594228/AnsiballZ_file.py'
Nov 26 12:48:20 compute-0 sudo[202551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:20 compute-0 python3.9[202553]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:20 compute-0 sudo[202551]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:20 compute-0 sudo[202703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toilzrapkzvpchlruuashlaqltxnvkda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161300.7269533-676-143209582628420/AnsiballZ_file.py'
Nov 26 12:48:20 compute-0 sudo[202703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:21 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:48:21 compute-0 python3.9[202705]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:21 compute-0 sudo[202703]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:21 compute-0 ceph-mon[74966]: pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:21 compute-0 sudo[202855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-folykeuyjqlblnequwagzgmmamspipkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161301.1824028-676-201261875468190/AnsiballZ_file.py'
Nov 26 12:48:21 compute-0 sudo[202855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:21 compute-0 python3.9[202857]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:21 compute-0 sudo[202855]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:21 compute-0 sudo[203007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqqwdynurnefakfrvoumbuterjzhdsyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161301.6537893-676-212759088315311/AnsiballZ_file.py'
Nov 26 12:48:21 compute-0 sudo[203007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:22 compute-0 python3.9[203009]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:22 compute-0 sudo[203007]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:22 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:22 compute-0 sudo[203159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjzbdwbmswagkozmjpcsevdldimvadjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161302.154494-676-91247284580132/AnsiballZ_file.py'
Nov 26 12:48:22 compute-0 sudo[203159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:22 compute-0 python3.9[203161]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:22 compute-0 sudo[203159]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:22 compute-0 sudo[203311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adkepxewlfturfxrkivckfyipngkdcyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161302.6367593-676-6210419544185/AnsiballZ_file.py'
Nov 26 12:48:22 compute-0 sudo[203311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:22 compute-0 python3.9[203313]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:23 compute-0 sudo[203311]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:23 compute-0 ceph-mon[74966]: pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:23 compute-0 sudo[203463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmvfuoqvniveobefnpvcczpkkbckjtiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161303.1603756-775-154261401173795/AnsiballZ_stat.py'
Nov 26 12:48:23 compute-0 sudo[203463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:23 compute-0 python3.9[203465]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:23 compute-0 sudo[203463]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:23 compute-0 ceph-osd[90297]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 12:48:23 compute-0 ceph-osd[90297]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 5527 writes, 23K keys, 5527 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5527 writes, 849 syncs, 6.51 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5527 writes, 23K keys, 5527 commit groups, 1.0 writes per commit group, ingest: 18.26 MB, 0.03 MB/s
                                           Interval WAL: 5527 writes, 849 syncs, 6.51 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 26 12:48:23 compute-0 sudo[203586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcksxamefnrngkizyziwoxogqxgmcipx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161303.1603756-775-154261401173795/AnsiballZ_copy.py'
Nov 26 12:48:23 compute-0 sudo[203586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:23 compute-0 python3.9[203588]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161303.1603756-775-154261401173795/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:23 compute-0 sudo[203586]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:24 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:24 compute-0 sudo[203738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnsdiaefagcdikohjkwcgqupuzitiyxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161304.0797348-775-189164635029203/AnsiballZ_stat.py'
Nov 26 12:48:24 compute-0 sudo[203738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:24 compute-0 python3.9[203740]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:24 compute-0 sudo[203738]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:24 compute-0 sudo[203861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhmtchlphdypojyblmjidcecshvtasvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161304.0797348-775-189164635029203/AnsiballZ_copy.py'
Nov 26 12:48:24 compute-0 sudo[203861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:24 compute-0 ceph-mgr[75236]: [devicehealth INFO root] Check health
Nov 26 12:48:24 compute-0 python3.9[203863]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161304.0797348-775-189164635029203/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:24 compute-0 sudo[203861]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:25 compute-0 ceph-mon[74966]: pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:25 compute-0 sudo[204013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlvrytqyvrberlvyxsercspyqlroitkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161304.988628-775-215286767773363/AnsiballZ_stat.py'
Nov 26 12:48:25 compute-0 sudo[204013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:25 compute-0 python3.9[204015]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:25 compute-0 sudo[204013]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:25 compute-0 sudo[204136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqjvzpktmethofbsnaagdfgfvlooicja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161304.988628-775-215286767773363/AnsiballZ_copy.py'
Nov 26 12:48:25 compute-0 sudo[204136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:25 compute-0 python3.9[204138]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161304.988628-775-215286767773363/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:25 compute-0 sudo[204136]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:48:26 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:26 compute-0 sudo[204288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erfydlvfuxwfoxlofbghdmgmtakwdwvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161305.9421377-775-17702914241174/AnsiballZ_stat.py'
Nov 26 12:48:26 compute-0 sudo[204288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:26 compute-0 python3.9[204290]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:26 compute-0 sudo[204288]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:26 compute-0 sudo[204411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbrupjgzfsklnmluwjzuvzyvvjppvevr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161305.9421377-775-17702914241174/AnsiballZ_copy.py'
Nov 26 12:48:26 compute-0 sudo[204411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:26 compute-0 python3.9[204413]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161305.9421377-775-17702914241174/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:26 compute-0 sudo[204411]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:27 compute-0 sudo[204563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpprygvqzaaktfqfjckwhrshcbcfexht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161306.8401961-775-164531440223944/AnsiballZ_stat.py'
Nov 26 12:48:27 compute-0 sudo[204563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:27 compute-0 ceph-mon[74966]: pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:27 compute-0 python3.9[204565]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:27 compute-0 sudo[204563]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:27 compute-0 sudo[204686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gclujbubevdklehoozkkwgprxyjoufxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161306.8401961-775-164531440223944/AnsiballZ_copy.py'
Nov 26 12:48:27 compute-0 sudo[204686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:27 compute-0 python3.9[204688]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161306.8401961-775-164531440223944/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:27 compute-0 sudo[204686]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:27 compute-0 sudo[204838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgzepdwgsncjykmxxbimfvhotfuhvplf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161307.7694519-775-10808284215559/AnsiballZ_stat.py'
Nov 26 12:48:27 compute-0 sudo[204838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:28 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:28 compute-0 python3.9[204840]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:28 compute-0 sudo[204838]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:28 compute-0 sudo[204961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcfpxnyiiselrbphjyumeowkqoarperx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161307.7694519-775-10808284215559/AnsiballZ_copy.py'
Nov 26 12:48:28 compute-0 sudo[204961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:28 compute-0 python3.9[204963]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161307.7694519-775-10808284215559/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:28 compute-0 sudo[204961]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:28 compute-0 sudo[205113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofzjkfgmjivfwxsurhqebnmeyivhtrkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161308.7312834-775-176864826569364/AnsiballZ_stat.py'
Nov 26 12:48:28 compute-0 sudo[205113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:29 compute-0 python3.9[205115]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:29 compute-0 sudo[205113]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:29 compute-0 ceph-mon[74966]: pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:29 compute-0 sudo[205236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvbmhnrhudzpxhuxuzpddstqesflzwgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161308.7312834-775-176864826569364/AnsiballZ_copy.py'
Nov 26 12:48:29 compute-0 sudo[205236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:29 compute-0 python3.9[205238]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161308.7312834-775-176864826569364/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:29 compute-0 sudo[205236]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:29 compute-0 sudo[205388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmfvhgrhxwcrhffibiefjzzqtgbamnqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161309.6708312-775-228413769703142/AnsiballZ_stat.py'
Nov 26 12:48:29 compute-0 sudo[205388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:30 compute-0 python3.9[205390]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:30 compute-0 sudo[205388]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:30 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:30 compute-0 sudo[205511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozxjbannmgpkophsbjfpcdkxuscqutsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161309.6708312-775-228413769703142/AnsiballZ_copy.py'
Nov 26 12:48:30 compute-0 sudo[205511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:30 compute-0 python3.9[205513]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161309.6708312-775-228413769703142/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:30 compute-0 sudo[205511]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:30 compute-0 sudo[205663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfrspuuqzuxpouirznvmjnvhfbkxxyiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161310.6079724-775-86166847146376/AnsiballZ_stat.py'
Nov 26 12:48:30 compute-0 sudo[205663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:30 compute-0 python3.9[205665]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:30 compute-0 sudo[205663]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:31 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:48:31 compute-0 ceph-mon[74966]: pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:31 compute-0 sudo[205786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhjglgrdijvbgyzihezylonjueuaxpmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161310.6079724-775-86166847146376/AnsiballZ_copy.py'
Nov 26 12:48:31 compute-0 sudo[205786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:31 compute-0 python3.9[205788]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161310.6079724-775-86166847146376/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:31 compute-0 sudo[205786]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:31 compute-0 sudo[205938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdpxnhjhpkemgiqwcwxnrphrdnncemud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161311.501768-775-187592748376917/AnsiballZ_stat.py'
Nov 26 12:48:31 compute-0 sudo[205938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:31 compute-0 python3.9[205940]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:31 compute-0 sudo[205938]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:32 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:32 compute-0 sudo[206069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxoibynfncmlvdcpdcnefdvbvebbzpfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161311.501768-775-187592748376917/AnsiballZ_copy.py'
Nov 26 12:48:32 compute-0 sudo[206069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:32 compute-0 podman[206035]: 2025-11-26 12:48:32.166363632 +0000 UTC m=+0.056710996 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 26 12:48:32 compute-0 python3.9[206078]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161311.501768-775-187592748376917/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:32 compute-0 sudo[206069]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:32 compute-0 sudo[206230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukgtkurykyofrzukaqpgxhlougqkskle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161312.4587128-775-205140732168498/AnsiballZ_stat.py'
Nov 26 12:48:32 compute-0 sudo[206230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:32 compute-0 python3.9[206232]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:32 compute-0 sudo[206230]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:33 compute-0 sudo[206353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owuhhwzzjunmgdxwbdfiphtmgqlkpyhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161312.4587128-775-205140732168498/AnsiballZ_copy.py'
Nov 26 12:48:33 compute-0 sudo[206353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:33 compute-0 ceph-mon[74966]: pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:33 compute-0 python3.9[206355]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161312.4587128-775-205140732168498/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:33 compute-0 sudo[206353]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:33 compute-0 sudo[206505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfkcfrslspevfrlpivgrtksqtuccutxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161313.4011762-775-111106744894191/AnsiballZ_stat.py'
Nov 26 12:48:33 compute-0 sudo[206505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:33 compute-0 python3.9[206507]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:33 compute-0 sudo[206505]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:34 compute-0 sudo[206628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljzuskyeiyslvzvoplhntbkojxhbyjyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161313.4011762-775-111106744894191/AnsiballZ_copy.py'
Nov 26 12:48:34 compute-0 sudo[206628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:34 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:34 compute-0 python3.9[206630]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161313.4011762-775-111106744894191/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:34 compute-0 sudo[206628]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:34 compute-0 sudo[206780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lldyacsfkoenbzhujtdfzmjhbykkswnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161314.3266828-775-43726961421276/AnsiballZ_stat.py'
Nov 26 12:48:34 compute-0 sudo[206780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:34 compute-0 python3.9[206782]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:34 compute-0 sudo[206780]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:34 compute-0 sudo[206903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyqawgwcmchaddqldvwzaoircdcfnzeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161314.3266828-775-43726961421276/AnsiballZ_copy.py'
Nov 26 12:48:34 compute-0 sudo[206903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:35 compute-0 python3.9[206905]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161314.3266828-775-43726961421276/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:35 compute-0 sudo[206903]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:35 compute-0 ceph-mon[74966]: pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:35 compute-0 sudo[207055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxprxnfgclfyaxfgjeepmklfghypkatm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161315.1902616-775-121160419239618/AnsiballZ_stat.py'
Nov 26 12:48:35 compute-0 sudo[207055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:35 compute-0 python3.9[207057]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:35 compute-0 sudo[207055]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:35 compute-0 sudo[207178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehobkrzhjfngrisbltipcboavkceggcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161315.1902616-775-121160419239618/AnsiballZ_copy.py'
Nov 26 12:48:35 compute-0 sudo[207178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:48:35
Nov 26 12:48:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 12:48:35 compute-0 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 12:48:35 compute-0 ceph-mgr[75236]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'default.rgw.log', 'backups', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'volumes', 'vms']
Nov 26 12:48:35 compute-0 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 12:48:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:48:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:48:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:48:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:48:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:48:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:48:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 12:48:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:48:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 12:48:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:48:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:48:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:48:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:48:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:48:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:48:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:48:35 compute-0 python3.9[207180]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161315.1902616-775-121160419239618/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:35 compute-0 sudo[207178]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:36 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:48:36 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:36 compute-0 python3.9[207330]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:48:36 compute-0 sudo[207483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqmmxbfpeycofaquxdlaynkpnkjqufaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161316.5775583-981-262567451140449/AnsiballZ_seboolean.py'
Nov 26 12:48:36 compute-0 sudo[207483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:37 compute-0 python3.9[207485]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 26 12:48:37 compute-0 ceph-mon[74966]: pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:37 compute-0 sudo[207483]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:38 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:38 compute-0 sudo[207639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdknyvfgdfnmathdwbmejbsesavroyme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161318.0265195-989-260867528948047/AnsiballZ_copy.py'
Nov 26 12:48:38 compute-0 dbus-broker-launch[767]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 26 12:48:38 compute-0 sudo[207639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:38 compute-0 python3.9[207641]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:38 compute-0 sudo[207639]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:38 compute-0 sudo[207791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkkuucvvqumbsldhbmkprbykrktqtlse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161318.4833539-989-123239324973241/AnsiballZ_copy.py'
Nov 26 12:48:38 compute-0 sudo[207791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:38 compute-0 python3.9[207793]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:38 compute-0 sudo[207791]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:39 compute-0 sudo[207943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifgkochcfcehriubvnzdpmteeknrdnvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161318.9036326-989-92557247795474/AnsiballZ_copy.py'
Nov 26 12:48:39 compute-0 sudo[207943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:39 compute-0 ceph-mon[74966]: pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:39 compute-0 python3.9[207945]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:39 compute-0 sudo[207943]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:39 compute-0 sudo[208095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqhauihlnahasfjkidmlfesdgoihucbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161319.3328488-989-280791378614688/AnsiballZ_copy.py'
Nov 26 12:48:39 compute-0 sudo[208095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:39 compute-0 python3.9[208097]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:39 compute-0 sudo[208095]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:39 compute-0 sudo[208247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukqbfjfyjwgcbmzebtqdaqwxkewbyama ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161319.7611926-989-271480342833004/AnsiballZ_copy.py'
Nov 26 12:48:39 compute-0 sudo[208247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:40 compute-0 python3.9[208249]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:40 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:40 compute-0 sudo[208247]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:40 compute-0 sudo[208399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioyjpeqqxwlsnoyexsqzwknksmlfgaxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161320.215902-1025-104701030591360/AnsiballZ_copy.py'
Nov 26 12:48:40 compute-0 sudo[208399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:40 compute-0 python3.9[208401]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:40 compute-0 sudo[208399]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:40 compute-0 sudo[208551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eodvyfzntllkzjjiiisufogrwnuslgsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161320.645225-1025-257324613730983/AnsiballZ_copy.py'
Nov 26 12:48:40 compute-0 sudo[208551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:40 compute-0 python3.9[208553]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:40 compute-0 sudo[208551]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:48:41 compute-0 ceph-mon[74966]: pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:41 compute-0 sudo[208703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxiwnedqbeznvwsdgcfapaveetqnpait ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161321.07226-1025-111517884036979/AnsiballZ_copy.py'
Nov 26 12:48:41 compute-0 sudo[208703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:41 compute-0 python3.9[208705]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:41 compute-0 sudo[208703]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:41 compute-0 sudo[208855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spptwcflofvjejaphgaxggyxrtutflfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161321.5118287-1025-18448792049237/AnsiballZ_copy.py'
Nov 26 12:48:41 compute-0 sudo[208855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:41 compute-0 python3.9[208857]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:41 compute-0 sudo[208855]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:42 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:42 compute-0 sudo[209007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsdpkklsgksvhbehebmoslizjzhenagd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161321.9474785-1025-88213398381853/AnsiballZ_copy.py'
Nov 26 12:48:42 compute-0 sudo[209007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:42 compute-0 python3.9[209009]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:42 compute-0 sudo[209007]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:42 compute-0 podman[209010]: 2025-11-26 12:48:42.374626667 +0000 UTC m=+0.064309701 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 26 12:48:42 compute-0 sudo[209183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfkvvrziayrjzathsulfbitktqcncfrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161322.437792-1061-197760059958796/AnsiballZ_systemd.py'
Nov 26 12:48:42 compute-0 sudo[209183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:42 compute-0 python3.9[209185]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 12:48:42 compute-0 systemd[1]: Reloading.
Nov 26 12:48:42 compute-0 systemd-sysv-generator[209213]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:48:42 compute-0 systemd-rc-local-generator[209209]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:48:43 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Nov 26 12:48:43 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Nov 26 12:48:43 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 26 12:48:43 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 26 12:48:43 compute-0 systemd[1]: Starting libvirt logging daemon...
Nov 26 12:48:43 compute-0 ceph-mon[74966]: pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:43 compute-0 systemd[1]: Started libvirt logging daemon.
Nov 26 12:48:43 compute-0 sudo[209183]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:43 compute-0 sudo[209376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwfenuxigvbhlhmibppudrmrtetawhmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161323.3547459-1061-82677748843835/AnsiballZ_systemd.py'
Nov 26 12:48:43 compute-0 sudo[209376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:43 compute-0 python3.9[209378]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 12:48:43 compute-0 systemd[1]: Reloading.
Nov 26 12:48:43 compute-0 systemd-sysv-generator[209402]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:48:43 compute-0 systemd-rc-local-generator[209399]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:48:44 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 26 12:48:44 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:44 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 26 12:48:44 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 26 12:48:44 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 26 12:48:44 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 26 12:48:44 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 26 12:48:44 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 26 12:48:44 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 26 12:48:44 compute-0 sudo[209376]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:44 compute-0 sudo[209592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqgrqfkugxkbvancotbudfqyjizowlxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161324.2566533-1061-59757366986746/AnsiballZ_systemd.py'
Nov 26 12:48:44 compute-0 sudo[209592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:44 compute-0 python3.9[209594]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 12:48:44 compute-0 systemd[1]: Reloading.
Nov 26 12:48:44 compute-0 systemd-sysv-generator[209618]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:48:44 compute-0 systemd-rc-local-generator[209614]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:48:44 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 26 12:48:45 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 26 12:48:45 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 26 12:48:45 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 26 12:48:45 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 26 12:48:45 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 26 12:48:45 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 26 12:48:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 12:48:45 compute-0 sudo[209592]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:48:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 12:48:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:48:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:48:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:48:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:48:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:48:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:48:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:48:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:48:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:48:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 12:48:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:48:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:48:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:48:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 12:48:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:48:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 12:48:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:48:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:48:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:48:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 12:48:45 compute-0 ceph-mon[74966]: pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:45 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 26 12:48:45 compute-0 sudo[209804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeidlrsykfamwyftppobidmqjjwpvenw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161325.1532571-1061-90832465090078/AnsiballZ_systemd.py'
Nov 26 12:48:45 compute-0 sudo[209804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:45 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 26 12:48:45 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 26 12:48:45 compute-0 python3.9[209806]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 12:48:45 compute-0 systemd[1]: Reloading.
Nov 26 12:48:45 compute-0 systemd-rc-local-generator[209836]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:48:45 compute-0 systemd-sysv-generator[209842]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:48:45 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Nov 26 12:48:45 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 26 12:48:45 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 26 12:48:45 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 26 12:48:45 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 26 12:48:45 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 26 12:48:45 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 26 12:48:45 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 26 12:48:45 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 26 12:48:45 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 26 12:48:45 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 26 12:48:45 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 26 12:48:46 compute-0 sudo[209804]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:48:46 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:46 compute-0 setroubleshoot[209654]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l e216a4c2-173d-4083-97df-b5be5c9efb29
Nov 26 12:48:46 compute-0 setroubleshoot[209654]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Nov 26 12:48:46 compute-0 sudo[210029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jstrcwckluwjbowtvibxaetsdxekdinh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161326.0949814-1061-200138299401069/AnsiballZ_systemd.py'
Nov 26 12:48:46 compute-0 sudo[210029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:46 compute-0 python3.9[210031]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 12:48:46 compute-0 systemd[1]: Reloading.
Nov 26 12:48:46 compute-0 systemd-sysv-generator[210056]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:48:46 compute-0 systemd-rc-local-generator[210053]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:48:46 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Nov 26 12:48:46 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Nov 26 12:48:46 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 26 12:48:46 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 26 12:48:46 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 26 12:48:46 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 26 12:48:46 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 26 12:48:46 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 26 12:48:46 compute-0 sudo[210029]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:47 compute-0 ceph-mon[74966]: pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:47 compute-0 sudo[210241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqzvznisbstakmrvdmsgmhrpzprcgjoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161327.0769815-1098-199213881826430/AnsiballZ_file.py'
Nov 26 12:48:47 compute-0 sudo[210241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:47 compute-0 python3.9[210243]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:47 compute-0 sudo[210241]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:47 compute-0 sudo[210393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpxcoxmtbjsrbmydrpbotirlmmvlnvit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161327.744306-1106-115731139539183/AnsiballZ_find.py'
Nov 26 12:48:47 compute-0 sudo[210393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:48 compute-0 python3.9[210395]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 12:48:48 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:48 compute-0 sudo[210393]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:48 compute-0 sudo[210545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wazxfmmmuzhdwipvomrhbknfjhkkhtuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161328.2160006-1114-202431994306662/AnsiballZ_command.py'
Nov 26 12:48:48 compute-0 sudo[210545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:48 compute-0 python3.9[210547]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:48:48 compute-0 sudo[210545]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:49 compute-0 python3.9[210701]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 12:48:49 compute-0 ceph-mon[74966]: pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:49 compute-0 python3.9[210851]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:50 compute-0 python3.9[210972]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764161329.3506982-1133-267316216175564/.source.xml follow=False _original_basename=secret.xml.j2 checksum=0c169d2ad7f41d18088a4831ca21879b2a114042 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:50 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:50 compute-0 sudo[211122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cswryujiiadywotglobtlqyackhrmdqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161330.1772523-1148-57658364486455/AnsiballZ_command.py'
Nov 26 12:48:50 compute-0 sudo[211122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:50 compute-0 python3.9[211124]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine f7d7fe93-41e5-51c4-b72d-63b38686102e
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:48:50 compute-0 polkitd[43512]: Registered Authentication Agent for unix-process:211126:243213 (system bus name :1.2683 [pkttyagent --process 211126 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 26 12:48:50 compute-0 polkitd[43512]: Unregistered Authentication Agent for unix-process:211126:243213 (system bus name :1.2683, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 26 12:48:50 compute-0 polkitd[43512]: Registered Authentication Agent for unix-process:211125:243212 (system bus name :1.2684 [pkttyagent --process 211125 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 26 12:48:50 compute-0 polkitd[43512]: Unregistered Authentication Agent for unix-process:211125:243212 (system bus name :1.2684, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 26 12:48:50 compute-0 sudo[211122]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:51 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:48:51 compute-0 python3.9[211286]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:51 compute-0 ceph-mon[74966]: pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:51 compute-0 sudo[211436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrumgopxbfvfxrotrgbfvazsejalpajp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161331.2144022-1164-154911664921293/AnsiballZ_command.py'
Nov 26 12:48:51 compute-0 sudo[211436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:51 compute-0 sudo[211436]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:51 compute-0 sudo[211589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmenchdqsicemrracwrqvjdwwblxdihc ; FSID=f7d7fe93-41e5-51c4-b72d-63b38686102e KEY=AQBP9CZpAAAAABAAMO+aLuzMDoNYc4bplXQ8ZQ== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161331.6990838-1172-110328315745563/AnsiballZ_command.py'
Nov 26 12:48:51 compute-0 sudo[211589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:52 compute-0 polkitd[43512]: Registered Authentication Agent for unix-process:211592:243364 (system bus name :1.2687 [pkttyagent --process 211592 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 26 12:48:52 compute-0 polkitd[43512]: Unregistered Authentication Agent for unix-process:211592:243364 (system bus name :1.2687, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 26 12:48:52 compute-0 sudo[211589]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:52 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:52 compute-0 sudo[211747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-picrpyryleyckbirloolxbuzbmbbzdkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161332.211357-1180-236742072429693/AnsiballZ_copy.py'
Nov 26 12:48:52 compute-0 sudo[211747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:52 compute-0 python3.9[211749]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:52 compute-0 sudo[211747]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:52 compute-0 sudo[211899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaihwnlisehckauizzrkgohxjbnbqeyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161332.7046738-1188-214669036490072/AnsiballZ_stat.py'
Nov 26 12:48:52 compute-0 sudo[211899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:53 compute-0 python3.9[211901]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:53 compute-0 sudo[211899]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:53 compute-0 ceph-mon[74966]: pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:53 compute-0 sudo[212022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvxrofxrexybgtpwrsxtmbowlatqapgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161332.7046738-1188-214669036490072/AnsiballZ_copy.py'
Nov 26 12:48:53 compute-0 sudo[212022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:53 compute-0 python3.9[212024]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764161332.7046738-1188-214669036490072/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:53 compute-0 sudo[212022]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:53 compute-0 sudo[212174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rukvpnemxtohbhgekzvlslbkkjltmpqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161333.6686323-1204-165807241702431/AnsiballZ_file.py'
Nov 26 12:48:53 compute-0 sudo[212174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:54 compute-0 python3.9[212176]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:54 compute-0 sudo[212174]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:54 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:54 compute-0 sudo[212326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-untrvwkfjsirkatalxisiqnxfjxjkngr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161334.1293464-1212-40681825739086/AnsiballZ_stat.py'
Nov 26 12:48:54 compute-0 sudo[212326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:54 compute-0 python3.9[212328]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:54 compute-0 sudo[212326]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:54 compute-0 sudo[212404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcioplqtupdqszlfhzxrfivhggfolxbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161334.1293464-1212-40681825739086/AnsiballZ_file.py'
Nov 26 12:48:54 compute-0 sudo[212404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:54 compute-0 python3.9[212406]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:54 compute-0 sudo[212404]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:55 compute-0 sudo[212556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sinebqlxdnzzixdeupyzlqflmnchkjku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161334.9228604-1224-179146150699227/AnsiballZ_stat.py'
Nov 26 12:48:55 compute-0 sudo[212556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:55 compute-0 ceph-mon[74966]: pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:55 compute-0 python3.9[212558]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:55 compute-0 sudo[212556]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:55 compute-0 sudo[212634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvixwooqmtyyhjryvdxxlfmivpyomhdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161334.9228604-1224-179146150699227/AnsiballZ_file.py'
Nov 26 12:48:55 compute-0 sudo[212634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:55 compute-0 python3.9[212636]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.4diplglj recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:55 compute-0 sudo[212634]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:55 compute-0 sudo[212786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrjvwainxpfaryqwyspacwoypxnysrzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161335.6790552-1236-263430613845512/AnsiballZ_stat.py'
Nov 26 12:48:55 compute-0 sudo[212786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:56 compute-0 python3.9[212788]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:48:56 compute-0 sudo[212786]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:56 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:56 compute-0 sudo[212864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmtkkxehomjrdgrmwkbcptyxzgyvgkkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161335.6790552-1236-263430613845512/AnsiballZ_file.py'
Nov 26 12:48:56 compute-0 sudo[212864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:56 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 26 12:48:56 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 26 12:48:56 compute-0 python3.9[212866]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:56 compute-0 sudo[212864]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:56 compute-0 sudo[212867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:48:56 compute-0 sudo[212867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:48:56 compute-0 sudo[212867]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:56 compute-0 sudo[212895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:48:56 compute-0 sudo[212895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:48:56 compute-0 sudo[212895]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:56 compute-0 sudo[212941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:48:56 compute-0 sudo[212941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:48:56 compute-0 sudo[212941]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:56 compute-0 sudo[212966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 12:48:56 compute-0 sudo[212966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:48:56 compute-0 sudo[213128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkzarvbngkctxprcdcsowxprdcdxvytd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161336.523384-1249-257308117118270/AnsiballZ_command.py'
Nov 26 12:48:56 compute-0 sudo[213128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:56 compute-0 python3.9[213130]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:48:56 compute-0 sudo[212966]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:56 compute-0 sudo[213128]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 26 12:48:56 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 26 12:48:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:48:56 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:48:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 12:48:56 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:48:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 12:48:56 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:48:56 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev d1aeefb3-ebf1-4310-9845-f5ea452a17d6 does not exist
Nov 26 12:48:56 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 793b7e18-ccf4-47d3-bcb6-dcacc4842087 does not exist
Nov 26 12:48:56 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev ffe4d145-daa7-4806-b1a8-3e1f66baa0a0 does not exist
Nov 26 12:48:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 12:48:56 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:48:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 12:48:56 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:48:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:48:56 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:48:56 compute-0 sudo[213161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:48:56 compute-0 sudo[213161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:48:56 compute-0 sudo[213161]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:57 compute-0 sudo[213198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:48:57 compute-0 sudo[213198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:48:57 compute-0 sudo[213198]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:57 compute-0 sudo[213246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:48:57 compute-0 sudo[213246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:48:57 compute-0 sudo[213246]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:57 compute-0 sudo[213300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 12:48:57 compute-0 sudo[213300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:48:57 compute-0 ceph-mon[74966]: pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 26 12:48:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:48:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:48:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:48:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:48:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:48:57 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:48:57 compute-0 sudo[213432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxtzjvohkxswscrmlcpnpnczayujbgld ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764161337.0147974-1257-60028816340177/AnsiballZ_edpm_nftables_from_files.py'
Nov 26 12:48:57 compute-0 sudo[213432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:57 compute-0 podman[213430]: 2025-11-26 12:48:57.353270379 +0000 UTC m=+0.040120536 container create e180de435edf3e19182b54c1665a3bda944550aed0dc2ba43bbd50d8b205c8b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 12:48:57 compute-0 systemd[1]: Started libpod-conmon-e180de435edf3e19182b54c1665a3bda944550aed0dc2ba43bbd50d8b205c8b5.scope.
Nov 26 12:48:57 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:48:57 compute-0 podman[213430]: 2025-11-26 12:48:57.416337031 +0000 UTC m=+0.103187197 container init e180de435edf3e19182b54c1665a3bda944550aed0dc2ba43bbd50d8b205c8b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Nov 26 12:48:57 compute-0 podman[213430]: 2025-11-26 12:48:57.422582915 +0000 UTC m=+0.109433071 container start e180de435edf3e19182b54c1665a3bda944550aed0dc2ba43bbd50d8b205c8b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:48:57 compute-0 podman[213430]: 2025-11-26 12:48:57.423724004 +0000 UTC m=+0.110574160 container attach e180de435edf3e19182b54c1665a3bda944550aed0dc2ba43bbd50d8b205c8b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 12:48:57 compute-0 suspicious_beaver[213446]: 167 167
Nov 26 12:48:57 compute-0 systemd[1]: libpod-e180de435edf3e19182b54c1665a3bda944550aed0dc2ba43bbd50d8b205c8b5.scope: Deactivated successfully.
Nov 26 12:48:57 compute-0 podman[213430]: 2025-11-26 12:48:57.427063864 +0000 UTC m=+0.113914019 container died e180de435edf3e19182b54c1665a3bda944550aed0dc2ba43bbd50d8b205c8b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 12:48:57 compute-0 podman[213430]: 2025-11-26 12:48:57.339689671 +0000 UTC m=+0.026539846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:48:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-5355abdd380a8973eb7ad5306b20b108757f3b6597eb97a22b16a317899e75c2-merged.mount: Deactivated successfully.
Nov 26 12:48:57 compute-0 podman[213430]: 2025-11-26 12:48:57.449659742 +0000 UTC m=+0.136509898 container remove e180de435edf3e19182b54c1665a3bda944550aed0dc2ba43bbd50d8b205c8b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:48:57 compute-0 systemd[1]: libpod-conmon-e180de435edf3e19182b54c1665a3bda944550aed0dc2ba43bbd50d8b205c8b5.scope: Deactivated successfully.
Nov 26 12:48:57 compute-0 python3[213440]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 26 12:48:57 compute-0 sudo[213432]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:57 compute-0 podman[213467]: 2025-11-26 12:48:57.584797555 +0000 UTC m=+0.034276499 container create 78aa78727c6bddc5be612010f5fb563b885f4d639fd20adda2a50ed33dcf2a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_zhukovsky, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 12:48:57 compute-0 systemd[1]: Started libpod-conmon-78aa78727c6bddc5be612010f5fb563b885f4d639fd20adda2a50ed33dcf2a7b.scope.
Nov 26 12:48:57 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb53554b1fc749f9c61932d012341f6bedfb2da96d1601f6de520198d691b25/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb53554b1fc749f9c61932d012341f6bedfb2da96d1601f6de520198d691b25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb53554b1fc749f9c61932d012341f6bedfb2da96d1601f6de520198d691b25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb53554b1fc749f9c61932d012341f6bedfb2da96d1601f6de520198d691b25/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb53554b1fc749f9c61932d012341f6bedfb2da96d1601f6de520198d691b25/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:48:57 compute-0 podman[213467]: 2025-11-26 12:48:57.650570363 +0000 UTC m=+0.100049298 container init 78aa78727c6bddc5be612010f5fb563b885f4d639fd20adda2a50ed33dcf2a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 12:48:57 compute-0 podman[213467]: 2025-11-26 12:48:57.658372578 +0000 UTC m=+0.107851513 container start 78aa78727c6bddc5be612010f5fb563b885f4d639fd20adda2a50ed33dcf2a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_zhukovsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 12:48:57 compute-0 podman[213467]: 2025-11-26 12:48:57.659650395 +0000 UTC m=+0.109129329 container attach 78aa78727c6bddc5be612010f5fb563b885f4d639fd20adda2a50ed33dcf2a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:48:57 compute-0 podman[213467]: 2025-11-26 12:48:57.570866788 +0000 UTC m=+0.020345742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:48:57 compute-0 sudo[213635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybbmgonerlukwyvoobdfodlcocxacmzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161337.6752439-1265-128403339180720/AnsiballZ_stat.py'
Nov 26 12:48:57 compute-0 sudo[213635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:58 compute-0 python3.9[213637]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:58 compute-0 sudo[213635]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:58 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:58 compute-0 sudo[213713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tutuefihgmdkzrkrblahwpslwocogacs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161337.6752439-1265-128403339180720/AnsiballZ_file.py'
Nov 26 12:48:58 compute-0 sudo[213713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:58 compute-0 python3.9[213715]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:58 compute-0 sudo[213713]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:58 compute-0 stoic_zhukovsky[213505]: --> passed data devices: 0 physical, 3 LVM
Nov 26 12:48:58 compute-0 stoic_zhukovsky[213505]: --> relative data size: 1.0
Nov 26 12:48:58 compute-0 stoic_zhukovsky[213505]: --> All data devices are unavailable
Nov 26 12:48:58 compute-0 systemd[1]: libpod-78aa78727c6bddc5be612010f5fb563b885f4d639fd20adda2a50ed33dcf2a7b.scope: Deactivated successfully.
Nov 26 12:48:58 compute-0 conmon[213505]: conmon 78aa78727c6bddc5be61 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-78aa78727c6bddc5be612010f5fb563b885f4d639fd20adda2a50ed33dcf2a7b.scope/container/memory.events
Nov 26 12:48:58 compute-0 podman[213467]: 2025-11-26 12:48:58.517726614 +0000 UTC m=+0.967205547 container died 78aa78727c6bddc5be612010f5fb563b885f4d639fd20adda2a50ed33dcf2a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:48:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bb53554b1fc749f9c61932d012341f6bedfb2da96d1601f6de520198d691b25-merged.mount: Deactivated successfully.
Nov 26 12:48:58 compute-0 podman[213467]: 2025-11-26 12:48:58.551700952 +0000 UTC m=+1.001179886 container remove 78aa78727c6bddc5be612010f5fb563b885f4d639fd20adda2a50ed33dcf2a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:48:58 compute-0 systemd[1]: libpod-conmon-78aa78727c6bddc5be612010f5fb563b885f4d639fd20adda2a50ed33dcf2a7b.scope: Deactivated successfully.
Nov 26 12:48:58 compute-0 sudo[213300]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:58 compute-0 sudo[213839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:48:58 compute-0 sudo[213839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:48:58 compute-0 sudo[213839]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:58 compute-0 sudo[213886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:48:58 compute-0 sudo[213886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:48:58 compute-0 sudo[213886]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:58 compute-0 sudo[213954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khfmnecacyunlrtazxygscmtvvfsqpsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161338.485465-1277-220207773384445/AnsiballZ_stat.py'
Nov 26 12:48:58 compute-0 sudo[213954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:58 compute-0 sudo[213947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:48:58 compute-0 sudo[213947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:48:58 compute-0 sudo[213947]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:58 compute-0 sudo[213978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- lvm list --format json
Nov 26 12:48:58 compute-0 sudo[213978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:48:58 compute-0 python3.9[213972]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:58 compute-0 sudo[213954]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:59 compute-0 podman[214081]: 2025-11-26 12:48:59.033850969 +0000 UTC m=+0.033565770 container create 397661dd663a9b0bd369b936bb65a9ff77296f46d179bc6b03a491417b7c59ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_feistel, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 12:48:59 compute-0 systemd[1]: Started libpod-conmon-397661dd663a9b0bd369b936bb65a9ff77296f46d179bc6b03a491417b7c59ee.scope.
Nov 26 12:48:59 compute-0 sudo[214119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jscqpnpfhxchjrozpjblhwduxobchvht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161338.485465-1277-220207773384445/AnsiballZ_file.py'
Nov 26 12:48:59 compute-0 sudo[214119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:59 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:48:59 compute-0 podman[214081]: 2025-11-26 12:48:59.087697565 +0000 UTC m=+0.087412386 container init 397661dd663a9b0bd369b936bb65a9ff77296f46d179bc6b03a491417b7c59ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_feistel, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 26 12:48:59 compute-0 podman[214081]: 2025-11-26 12:48:59.092519667 +0000 UTC m=+0.092234469 container start 397661dd663a9b0bd369b936bb65a9ff77296f46d179bc6b03a491417b7c59ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 12:48:59 compute-0 podman[214081]: 2025-11-26 12:48:59.093809215 +0000 UTC m=+0.093524017 container attach 397661dd663a9b0bd369b936bb65a9ff77296f46d179bc6b03a491417b7c59ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_feistel, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 12:48:59 compute-0 nervous_feistel[214124]: 167 167
Nov 26 12:48:59 compute-0 systemd[1]: libpod-397661dd663a9b0bd369b936bb65a9ff77296f46d179bc6b03a491417b7c59ee.scope: Deactivated successfully.
Nov 26 12:48:59 compute-0 podman[214081]: 2025-11-26 12:48:59.096530181 +0000 UTC m=+0.096244982 container died 397661dd663a9b0bd369b936bb65a9ff77296f46d179bc6b03a491417b7c59ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_feistel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:48:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-51f9e8a2786a9eb5bccdc8b2d69d52b0f58f2933cf3d89e088de9d5ca19ef34e-merged.mount: Deactivated successfully.
Nov 26 12:48:59 compute-0 podman[214081]: 2025-11-26 12:48:59.117695304 +0000 UTC m=+0.117410106 container remove 397661dd663a9b0bd369b936bb65a9ff77296f46d179bc6b03a491417b7c59ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_feistel, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:48:59 compute-0 podman[214081]: 2025-11-26 12:48:59.021148675 +0000 UTC m=+0.020863496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:48:59 compute-0 systemd[1]: libpod-conmon-397661dd663a9b0bd369b936bb65a9ff77296f46d179bc6b03a491417b7c59ee.scope: Deactivated successfully.
Nov 26 12:48:59 compute-0 ceph-mon[74966]: pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:48:59 compute-0 python3.9[214126]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:48:59 compute-0 podman[214147]: 2025-11-26 12:48:59.242146058 +0000 UTC m=+0.030759124 container create 293d5650866f29ead11f5053a2d8fe9dfca089576c141eb1cb88019242a4cd6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_driscoll, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 12:48:59 compute-0 sudo[214119]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:59 compute-0 systemd[1]: Started libpod-conmon-293d5650866f29ead11f5053a2d8fe9dfca089576c141eb1cb88019242a4cd6e.scope.
Nov 26 12:48:59 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:48:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf1701db1c0121820d87678b2f20557a91966b67feb7bff13c96032ef424d55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:48:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf1701db1c0121820d87678b2f20557a91966b67feb7bff13c96032ef424d55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:48:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf1701db1c0121820d87678b2f20557a91966b67feb7bff13c96032ef424d55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:48:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf1701db1c0121820d87678b2f20557a91966b67feb7bff13c96032ef424d55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:48:59 compute-0 podman[214147]: 2025-11-26 12:48:59.311261433 +0000 UTC m=+0.099874498 container init 293d5650866f29ead11f5053a2d8fe9dfca089576c141eb1cb88019242a4cd6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_driscoll, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:48:59 compute-0 podman[214147]: 2025-11-26 12:48:59.316342583 +0000 UTC m=+0.104955648 container start 293d5650866f29ead11f5053a2d8fe9dfca089576c141eb1cb88019242a4cd6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:48:59 compute-0 podman[214147]: 2025-11-26 12:48:59.317995004 +0000 UTC m=+0.106608089 container attach 293d5650866f29ead11f5053a2d8fe9dfca089576c141eb1cb88019242a4cd6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 12:48:59 compute-0 podman[214147]: 2025-11-26 12:48:59.229606291 +0000 UTC m=+0.018219376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:48:59 compute-0 sudo[214314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtqgqzeljxtasrbgqtjfmtxzjzjxmknv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161339.3682573-1289-33864428528980/AnsiballZ_stat.py'
Nov 26 12:48:59 compute-0 sudo[214314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:59 compute-0 python3.9[214316]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:48:59 compute-0 sudo[214314]: pam_unix(sudo:session): session closed for user root
Nov 26 12:48:59 compute-0 sudo[214394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpiafmjectkrhglzenipbobahutkhcag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161339.3682573-1289-33864428528980/AnsiballZ_file.py'
Nov 26 12:48:59 compute-0 sudo[214394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]: {
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:     "0": [
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:         {
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "devices": [
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "/dev/loop3"
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             ],
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "lv_name": "ceph_lv0",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "lv_size": "21470642176",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "name": "ceph_lv0",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "tags": {
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.cluster_name": "ceph",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.crush_device_class": "",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.encrypted": "0",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.osd_id": "0",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.type": "block",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.vdo": "0"
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             },
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "type": "block",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "vg_name": "ceph_vg0"
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:         }
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:     ],
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:     "1": [
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:         {
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "devices": [
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "/dev/loop4"
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             ],
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "lv_name": "ceph_lv1",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "lv_size": "21470642176",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "name": "ceph_lv1",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "tags": {
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.cluster_name": "ceph",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.crush_device_class": "",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.encrypted": "0",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.osd_id": "1",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.type": "block",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.vdo": "0"
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             },
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "type": "block",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "vg_name": "ceph_vg1"
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:         }
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:     ],
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:     "2": [
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:         {
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "devices": [
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "/dev/loop5"
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             ],
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "lv_name": "ceph_lv2",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "lv_size": "21470642176",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "name": "ceph_lv2",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "tags": {
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.cluster_name": "ceph",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.crush_device_class": "",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.encrypted": "0",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.osd_id": "2",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.type": "block",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:                 "ceph.vdo": "0"
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             },
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "type": "block",
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:             "vg_name": "ceph_vg2"
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:         }
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]:     ]
Nov 26 12:48:59 compute-0 amazing_driscoll[214172]: }
Nov 26 12:48:59 compute-0 systemd[1]: libpod-293d5650866f29ead11f5053a2d8fe9dfca089576c141eb1cb88019242a4cd6e.scope: Deactivated successfully.
Nov 26 12:48:59 compute-0 podman[214147]: 2025-11-26 12:48:59.972524276 +0000 UTC m=+0.761137342 container died 293d5650866f29ead11f5053a2d8fe9dfca089576c141eb1cb88019242a4cd6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:48:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-eaf1701db1c0121820d87678b2f20557a91966b67feb7bff13c96032ef424d55-merged.mount: Deactivated successfully.
Nov 26 12:49:00 compute-0 podman[214147]: 2025-11-26 12:49:00.009597952 +0000 UTC m=+0.798211018 container remove 293d5650866f29ead11f5053a2d8fe9dfca089576c141eb1cb88019242a4cd6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_driscoll, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 12:49:00 compute-0 systemd[1]: libpod-conmon-293d5650866f29ead11f5053a2d8fe9dfca089576c141eb1cb88019242a4cd6e.scope: Deactivated successfully.
Nov 26 12:49:00 compute-0 sudo[213978]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:00 compute-0 python3.9[214396]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:00 compute-0 sudo[214409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:49:00 compute-0 sudo[214409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:49:00 compute-0 sudo[214409]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:00 compute-0 sudo[214394]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:00 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:00 compute-0 sudo[214434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:49:00 compute-0 sudo[214434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:49:00 compute-0 sudo[214434]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:00 compute-0 sudo[214483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:49:00 compute-0 sudo[214483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:49:00 compute-0 sudo[214483]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:00 compute-0 sudo[214511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- raw list --format json
Nov 26 12:49:00 compute-0 sudo[214511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:49:00 compute-0 sudo[214681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeiknfuehexguqzunzgtwvnddrsylken ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161340.2087917-1301-260974276694223/AnsiballZ_stat.py'
Nov 26 12:49:00 compute-0 sudo[214681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:00 compute-0 podman[214693]: 2025-11-26 12:49:00.481405479 +0000 UTC m=+0.030697858 container create fe9ccb92ad054c02a1c143b8e33fe4e39a8812dae740140bcdfe886806edfaa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:49:00 compute-0 systemd[1]: Started libpod-conmon-fe9ccb92ad054c02a1c143b8e33fe4e39a8812dae740140bcdfe886806edfaa6.scope.
Nov 26 12:49:00 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:49:00 compute-0 podman[214693]: 2025-11-26 12:49:00.540447892 +0000 UTC m=+0.089740280 container init fe9ccb92ad054c02a1c143b8e33fe4e39a8812dae740140bcdfe886806edfaa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_carver, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:49:00 compute-0 podman[214693]: 2025-11-26 12:49:00.54514582 +0000 UTC m=+0.094438198 container start fe9ccb92ad054c02a1c143b8e33fe4e39a8812dae740140bcdfe886806edfaa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:49:00 compute-0 podman[214693]: 2025-11-26 12:49:00.548741642 +0000 UTC m=+0.098034041 container attach fe9ccb92ad054c02a1c143b8e33fe4e39a8812dae740140bcdfe886806edfaa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_carver, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 12:49:00 compute-0 dreamy_carver[214707]: 167 167
Nov 26 12:49:00 compute-0 systemd[1]: libpod-fe9ccb92ad054c02a1c143b8e33fe4e39a8812dae740140bcdfe886806edfaa6.scope: Deactivated successfully.
Nov 26 12:49:00 compute-0 conmon[214707]: conmon fe9ccb92ad054c02a1c1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fe9ccb92ad054c02a1c143b8e33fe4e39a8812dae740140bcdfe886806edfaa6.scope/container/memory.events
Nov 26 12:49:00 compute-0 podman[214693]: 2025-11-26 12:49:00.550536032 +0000 UTC m=+0.099828410 container died fe9ccb92ad054c02a1c143b8e33fe4e39a8812dae740140bcdfe886806edfaa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_carver, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 12:49:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9fb4df5438f7582e0420d0042ad3eb4108c4b35e62ed45f88e4d72ad72de5f5-merged.mount: Deactivated successfully.
Nov 26 12:49:00 compute-0 podman[214693]: 2025-11-26 12:49:00.469903567 +0000 UTC m=+0.019195964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:49:00 compute-0 podman[214693]: 2025-11-26 12:49:00.574177389 +0000 UTC m=+0.123469768 container remove fe9ccb92ad054c02a1c143b8e33fe4e39a8812dae740140bcdfe886806edfaa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_carver, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:49:00 compute-0 systemd[1]: libpod-conmon-fe9ccb92ad054c02a1c143b8e33fe4e39a8812dae740140bcdfe886806edfaa6.scope: Deactivated successfully.
Nov 26 12:49:00 compute-0 python3.9[214688]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:49:00 compute-0 sudo[214681]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:00 compute-0 podman[214732]: 2025-11-26 12:49:00.701899415 +0000 UTC m=+0.032875621 container create e13b5ed9a6674c8114cecc5cb0b8df17ecf365d3d48aaf6ea13bc9f571b8f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 12:49:00 compute-0 systemd[1]: Started libpod-conmon-e13b5ed9a6674c8114cecc5cb0b8df17ecf365d3d48aaf6ea13bc9f571b8f802.scope.
Nov 26 12:49:00 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b05d86833688eb76ee363406a176588649d09e671b0e7951c0ea346a0575532/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b05d86833688eb76ee363406a176588649d09e671b0e7951c0ea346a0575532/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b05d86833688eb76ee363406a176588649d09e671b0e7951c0ea346a0575532/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b05d86833688eb76ee363406a176588649d09e671b0e7951c0ea346a0575532/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:49:00 compute-0 podman[214732]: 2025-11-26 12:49:00.762002495 +0000 UTC m=+0.092978710 container init e13b5ed9a6674c8114cecc5cb0b8df17ecf365d3d48aaf6ea13bc9f571b8f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:49:00 compute-0 podman[214732]: 2025-11-26 12:49:00.768868106 +0000 UTC m=+0.099844312 container start e13b5ed9a6674c8114cecc5cb0b8df17ecf365d3d48aaf6ea13bc9f571b8f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:49:00 compute-0 podman[214732]: 2025-11-26 12:49:00.770524155 +0000 UTC m=+0.101500361 container attach e13b5ed9a6674c8114cecc5cb0b8df17ecf365d3d48aaf6ea13bc9f571b8f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:49:00 compute-0 podman[214732]: 2025-11-26 12:49:00.688615817 +0000 UTC m=+0.019592042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:49:00 compute-0 sudo[214822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypkxbvgktxzburnaeayevetidfofgfor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161340.2087917-1301-260974276694223/AnsiballZ_file.py'
Nov 26 12:49:00 compute-0 sudo[214822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:00 compute-0 python3.9[214824]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:00 compute-0 sudo[214822]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:01 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:49:01 compute-0 ceph-mon[74966]: pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:01 compute-0 sudo[214976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oelsvpfwfsnbgvhqftyigmzpqoldcdhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161341.103489-1313-150961457938639/AnsiballZ_stat.py'
Nov 26 12:49:01 compute-0 sudo[214976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:01 compute-0 python3.9[214978]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:49:01 compute-0 sudo[214976]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:01 compute-0 pensive_margulis[214781]: {
Nov 26 12:49:01 compute-0 pensive_margulis[214781]:     "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 12:49:01 compute-0 pensive_margulis[214781]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:49:01 compute-0 pensive_margulis[214781]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 12:49:01 compute-0 pensive_margulis[214781]:         "osd_id": 1,
Nov 26 12:49:01 compute-0 pensive_margulis[214781]:         "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:49:01 compute-0 pensive_margulis[214781]:         "type": "bluestore"
Nov 26 12:49:01 compute-0 pensive_margulis[214781]:     },
Nov 26 12:49:01 compute-0 pensive_margulis[214781]:     "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 12:49:01 compute-0 pensive_margulis[214781]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:49:01 compute-0 pensive_margulis[214781]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 12:49:01 compute-0 pensive_margulis[214781]:         "osd_id": 2,
Nov 26 12:49:01 compute-0 pensive_margulis[214781]:         "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:49:01 compute-0 pensive_margulis[214781]:         "type": "bluestore"
Nov 26 12:49:01 compute-0 pensive_margulis[214781]:     },
Nov 26 12:49:01 compute-0 pensive_margulis[214781]:     "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 12:49:01 compute-0 pensive_margulis[214781]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:49:01 compute-0 pensive_margulis[214781]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 12:49:01 compute-0 pensive_margulis[214781]:         "osd_id": 0,
Nov 26 12:49:01 compute-0 pensive_margulis[214781]:         "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:49:01 compute-0 pensive_margulis[214781]:         "type": "bluestore"
Nov 26 12:49:01 compute-0 pensive_margulis[214781]:     }
Nov 26 12:49:01 compute-0 pensive_margulis[214781]: }
Nov 26 12:49:01 compute-0 systemd[1]: libpod-e13b5ed9a6674c8114cecc5cb0b8df17ecf365d3d48aaf6ea13bc9f571b8f802.scope: Deactivated successfully.
Nov 26 12:49:01 compute-0 podman[214732]: 2025-11-26 12:49:01.573909918 +0000 UTC m=+0.904886144 container died e13b5ed9a6674c8114cecc5cb0b8df17ecf365d3d48aaf6ea13bc9f571b8f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:49:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b05d86833688eb76ee363406a176588649d09e671b0e7951c0ea346a0575532-merged.mount: Deactivated successfully.
Nov 26 12:49:01 compute-0 podman[214732]: 2025-11-26 12:49:01.606518294 +0000 UTC m=+0.937494499 container remove e13b5ed9a6674c8114cecc5cb0b8df17ecf365d3d48aaf6ea13bc9f571b8f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:49:01 compute-0 systemd[1]: libpod-conmon-e13b5ed9a6674c8114cecc5cb0b8df17ecf365d3d48aaf6ea13bc9f571b8f802.scope: Deactivated successfully.
Nov 26 12:49:01 compute-0 sudo[214511]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:01 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:49:01 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:49:01 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:49:01 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:49:01 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 243bd9ad-061c-4ece-87e5-2db84f01aca2 does not exist
Nov 26 12:49:01 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 36444402-dd88-4af0-ab34-5d73ba9349d4 does not exist
Nov 26 12:49:01 compute-0 sudo[215066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:49:01 compute-0 sudo[215066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:49:01 compute-0 sudo[215066]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:01 compute-0 sudo[215113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 12:49:01 compute-0 sudo[215113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:49:01 compute-0 sudo[215113]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:49:01.722 159053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 12:49:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:49:01.723 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 12:49:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:49:01.724 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 12:49:01 compute-0 sudo[215188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijudtcroducpdhdzhthagnelnjtyxkru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161341.103489-1313-150961457938639/AnsiballZ_copy.py'
Nov 26 12:49:01 compute-0 sudo[215188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:01 compute-0 python3.9[215190]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764161341.103489-1313-150961457938639/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:01 compute-0 sudo[215188]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:02 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:02 compute-0 sudo[215340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsoxzjjzbdfvosnuqqmivwzajszygapj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161342.0468874-1328-117360152591669/AnsiballZ_file.py'
Nov 26 12:49:02 compute-0 sudo[215340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:02 compute-0 podman[215342]: 2025-11-26 12:49:02.261419796 +0000 UTC m=+0.039753985 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 26 12:49:02 compute-0 python3.9[215343]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:02 compute-0 sudo[215340]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:02 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:49:02 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:49:02 compute-0 ceph-mon[74966]: pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:02 compute-0 sudo[215507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odjypjkzvbkmklgblufyfcamqlfndnft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161342.493346-1336-133878193390872/AnsiballZ_command.py'
Nov 26 12:49:02 compute-0 sudo[215507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:02 compute-0 python3.9[215509]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:49:02 compute-0 sudo[215507]: pam_unix(sudo:session): session closed for user root
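The command task above concatenates the EDPM nftables fragments and pipes them through "nft -c -f -", i.e. a parse-only check before anything is committed. A minimal Python sketch of that same validation step, assuming the fragment paths shown in the log and an installed nft binary (not part of the log itself), could look like this:

    import subprocess
    from pathlib import Path

    # Fragment order matches the logged check: chains, flushes, rules,
    # update-jumps, jumps.
    FRAGMENTS = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]

    def check_ruleset(paths=FRAGMENTS):
        """Concatenate the fragments and let nft parse them without committing (-c)."""
        ruleset = "\n".join(Path(p).read_text() for p in paths)
        result = subprocess.run(["nft", "-c", "-f", "-"],
                                input=ruleset, text=True, capture_output=True)
        return result.returncode == 0, result.stderr

    if __name__ == "__main__":
        ok, err = check_ruleset()
        print("ruleset OK" if ok else "ruleset invalid:\n" + err)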
Nov 26 12:49:03 compute-0 sudo[215662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clnjoirzmzannhtkjevhxpmvesbdnlpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161342.9725301-1344-70953814576272/AnsiballZ_blockinfile.py'
Nov 26 12:49:03 compute-0 sudo[215662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:03 compute-0 python3.9[215664]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:03 compute-0 sudo[215662]: pam_unix(sudo:session): session closed for user root
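The blockinfile task above keeps the four include lines inside a marked block in /etc/sysconfig/nftables.conf and validates the result with "nft -c -f %s". A rough sketch of the insert-or-replace behaviour, assuming blockinfile's default "# {mark} ANSIBLE MANAGED BLOCK" markers from the logged parameters (this is illustrative, not the module's actual implementation):

    import re
    from pathlib import Path

    MARK_BEGIN = "# BEGIN ANSIBLE MANAGED BLOCK"
    MARK_END = "# END ANSIBLE MANAGED BLOCK"
    BLOCK = "\n".join([
        'include "/etc/nftables/iptables.nft"',
        'include "/etc/nftables/edpm-chains.nft"',
        'include "/etc/nftables/edpm-rules.nft"',
        'include "/etc/nftables/edpm-jumps.nft"',
    ])

    def upsert_block(path="/etc/sysconfig/nftables.conf"):
        """Replace the existing marked block, or append one if none is present."""
        text = Path(path).read_text()
        managed = f"{MARK_BEGIN}\n{BLOCK}\n{MARK_END}"
        pattern = re.compile(re.escape(MARK_BEGIN) + r".*?" + re.escape(MARK_END), re.S)
        if pattern.search(text):
            text = pattern.sub(managed, text)
        else:
            text = text.rstrip("\n") + "\n" + managed + "\n"
        Path(path).write_text(text)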
Nov 26 12:49:03 compute-0 sudo[215814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdtrowtgkfxwwebgyczxdmimehevfbjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161343.6445653-1353-141822252731147/AnsiballZ_command.py'
Nov 26 12:49:03 compute-0 sudo[215814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:03 compute-0 python3.9[215816]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:49:03 compute-0 sudo[215814]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:04 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:04 compute-0 sudo[215967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jixljtsoxppmzkdigtelhdqgerhjbean ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161344.1068807-1361-68079286837923/AnsiballZ_stat.py'
Nov 26 12:49:04 compute-0 sudo[215967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:04 compute-0 python3.9[215969]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:49:04 compute-0 sudo[215967]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:04 compute-0 sudo[216121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emfhjjdkbnqwumxuxuilgfpdmhbjortu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161344.5710845-1369-130109180566458/AnsiballZ_command.py'
Nov 26 12:49:04 compute-0 sudo[216121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:04 compute-0 python3.9[216123]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:49:04 compute-0 sudo[216121]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:05 compute-0 ceph-mon[74966]: pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:05 compute-0 sudo[216276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjpaswnqxuthlaedbhkzessnjespgukw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161345.049148-1377-116248131747703/AnsiballZ_file.py'
Nov 26 12:49:05 compute-0 sudo[216276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:05 compute-0 python3.9[216278]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:05 compute-0 sudo[216276]: pam_unix(sudo:session): session closed for user root
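The sequence visible between 12:49:01 and 12:49:05 is a change-marker pattern: copying edpm-rules.nft touches edpm-rules.nft.changed, the marker is stat'ed, the flushes/rules/update-jumps fragments are applied with "nft -f -" only because the marker exists, and the marker is then removed. A minimal sketch of that apply-only-if-changed logic, using the paths from the log:

    import subprocess
    from pathlib import Path

    MARKER = Path("/etc/nftables/edpm-rules.nft.changed")
    APPLY_FRAGMENTS = [
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
    ]

    def apply_if_changed():
        """Apply the fragments only when the change marker exists, then clear it."""
        if not MARKER.exists():
            return False                 # nothing changed since the last run
        ruleset = "\n".join(Path(p).read_text() for p in APPLY_FRAGMENTS)
        subprocess.run(["nft", "-f", "-"], input=ruleset, text=True, check=True)
        MARKER.unlink()                  # reset the marker for the next run
        return True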
Nov 26 12:49:05 compute-0 sudo[216428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqrmfczirhvmdlhwsvhwbhwwzswfrxjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161345.502659-1385-108800985136086/AnsiballZ_stat.py'
Nov 26 12:49:05 compute-0 sudo[216428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:05 compute-0 python3.9[216430]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:49:05 compute-0 sudo[216428]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:49:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:49:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:49:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:49:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:49:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:49:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:49:06 compute-0 sudo[216551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrpsxdqwzzggjnibgqlfdemyuubotbpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161345.502659-1385-108800985136086/AnsiballZ_copy.py'
Nov 26 12:49:06 compute-0 sudo[216551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:06 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:06 compute-0 python3.9[216553]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764161345.502659-1385-108800985136086/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:06 compute-0 sudo[216551]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:06 compute-0 sudo[216703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqaqkaugxhxgkqnhmfqtrsjbasvulqxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161346.3463955-1400-108563032484030/AnsiballZ_stat.py'
Nov 26 12:49:06 compute-0 sudo[216703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:06 compute-0 python3.9[216705]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:49:06 compute-0 sudo[216703]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:07 compute-0 sudo[216826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmthhpqecogppdfwesptuxvccdsbvjxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161346.3463955-1400-108563032484030/AnsiballZ_copy.py'
Nov 26 12:49:07 compute-0 sudo[216826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:07 compute-0 ceph-mon[74966]: pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:07 compute-0 python3.9[216828]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764161346.3463955-1400-108563032484030/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:07 compute-0 sudo[216826]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:07 compute-0 sudo[216978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oprwgylrmfqrtoharxujclynggecmgkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161347.3039613-1415-173157662984290/AnsiballZ_stat.py'
Nov 26 12:49:07 compute-0 sudo[216978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:07 compute-0 python3.9[216980]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:49:07 compute-0 sudo[216978]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:07 compute-0 sudo[217101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmpdizqxvktlqgrxxhxirwsichnfgdcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161347.3039613-1415-173157662984290/AnsiballZ_copy.py'
Nov 26 12:49:07 compute-0 sudo[217101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:08 compute-0 python3.9[217103]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764161347.3039613-1415-173157662984290/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:08 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:08 compute-0 sudo[217101]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:08 compute-0 sudo[217253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obbjvcesictvjmwmkyhanvzcpcldwtzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161348.2268093-1430-274772733518869/AnsiballZ_systemd.py'
Nov 26 12:49:08 compute-0 sudo[217253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:08 compute-0 python3.9[217255]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:49:08 compute-0 systemd[1]: Reloading.
Nov 26 12:49:08 compute-0 systemd-rc-local-generator[217280]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:49:08 compute-0 systemd-sysv-generator[217283]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:49:08 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Nov 26 12:49:08 compute-0 sudo[217253]: pam_unix(sudo:session): session closed for user root
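The systemd module call above (daemon_reload=True, enabled=True, state=restarted) corresponds to a daemon-reload followed by enabling and restarting the new target, which is why systemd logs "Reloading." and then "Reached target edpm_libvirt.target." As a rough equivalent in CLI terms (the ansible module itself talks to systemd over D-Bus rather than shelling out):

    import subprocess

    def enable_and_restart(unit="edpm_libvirt.target"):
        """daemon-reload, enable and restart the unit, as the logged module call implies."""
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "enable", unit], check=True)
        subprocess.run(["systemctl", "restart", unit], check=True)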
Nov 26 12:49:09 compute-0 ceph-mon[74966]: pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:09 compute-0 sudo[217443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbgvbkrssqelaudsuvjzfztsjpmqngkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161349.0978239-1438-49777081959565/AnsiballZ_systemd.py'
Nov 26 12:49:09 compute-0 sudo[217443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:09 compute-0 python3.9[217445]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 26 12:49:09 compute-0 systemd[1]: Reloading.
Nov 26 12:49:09 compute-0 systemd-rc-local-generator[217466]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:49:09 compute-0 systemd-sysv-generator[217469]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:49:09 compute-0 systemd[1]: Reloading.
Nov 26 12:49:09 compute-0 systemd-rc-local-generator[217505]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:49:09 compute-0 systemd-sysv-generator[217508]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:49:10 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Nov 26 12:49:10 compute-0 sudo[217443]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:10 compute-0 sshd-session[159168]: Connection closed by 192.168.122.30 port 46650
Nov 26 12:49:10 compute-0 sshd-session[159165]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:49:10 compute-0 systemd-logind[777]: Session 48 logged out. Waiting for processes to exit.
Nov 26 12:49:10 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Nov 26 12:49:10 compute-0 systemd[1]: session-48.scope: Consumed 2min 32.767s CPU time.
Nov 26 12:49:10 compute-0 systemd-logind[777]: Removed session 48.
Nov 26 12:49:11 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:49:11 compute-0 ceph-mon[74966]: pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Nov 26 12:49:12 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 12:49:12 compute-0 podman[217542]: 2025-11-26 12:49:12.888033996 +0000 UTC m=+0.056345032 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 26 12:49:13 compute-0 ceph-mon[74966]: pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 12:49:14 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 12:49:15 compute-0 ceph-mon[74966]: pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 12:49:15 compute-0 sshd-session[217565]: Accepted publickey for zuul from 192.168.122.30 port 48448 ssh2: ECDSA SHA256:oYqKaXpw3UXfGsjV9kVmxjhHDxrL0kntNo/c2mjveus
Nov 26 12:49:15 compute-0 systemd-logind[777]: New session 49 of user zuul.
Nov 26 12:49:15 compute-0 systemd[1]: Started Session 49 of User zuul.
Nov 26 12:49:15 compute-0 sshd-session[217565]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:49:16 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:49:16 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 12:49:16 compute-0 python3.9[217718]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:49:17 compute-0 ceph-mon[74966]: pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 12:49:17 compute-0 python3.9[217872]: ansible-ansible.builtin.service_facts Invoked
Nov 26 12:49:17 compute-0 network[217889]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 12:49:17 compute-0 network[217890]: 'network-scripts' will be removed from distribution in near future.
Nov 26 12:49:17 compute-0 network[217891]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 12:49:18 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 12:49:19 compute-0 ceph-mon[74966]: pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 12:49:19 compute-0 sudo[218161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdoeudmgyvuhsvrfdmqqwxvwsuykdduw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161359.6862147-47-200978782377499/AnsiballZ_setup.py'
Nov 26 12:49:19 compute-0 sudo[218161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:20 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 12:49:20 compute-0 python3.9[218163]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 12:49:20 compute-0 sudo[218161]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:20 compute-0 sudo[218245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojvmohmxjjwqaidmjnfolbjpivdyumqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161359.6862147-47-200978782377499/AnsiballZ_dnf.py'
Nov 26 12:49:20 compute-0 sudo[218245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:20 compute-0 python3.9[218247]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 12:49:21 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:49:21 compute-0 ceph-mon[74966]: pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 12:49:22 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Nov 26 12:49:23 compute-0 ceph-mon[74966]: pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Nov 26 12:49:24 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:24 compute-0 sudo[218245]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:25 compute-0 ceph-mon[74966]: pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:25 compute-0 sudo[218398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbxnushyeysuhnpnbuereotfsfcxfsso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161365.1151195-59-270792525925674/AnsiballZ_stat.py'
Nov 26 12:49:25 compute-0 sudo[218398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:25 compute-0 python3.9[218400]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:49:25 compute-0 sudo[218398]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:26 compute-0 sudo[218550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbutnzuhdxqzkxulpeqnhetnfqrelicj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161365.7217565-69-106437228392070/AnsiballZ_command.py'
Nov 26 12:49:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:49:26 compute-0 sudo[218550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:26 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:26 compute-0 python3.9[218552]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:49:26 compute-0 sudo[218550]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:26 compute-0 sudo[218703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dspvqtuelaxpqardaewxkrkfazhmctad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161366.4047587-79-116342458302573/AnsiballZ_stat.py'
Nov 26 12:49:26 compute-0 sudo[218703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:26 compute-0 python3.9[218705]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:49:26 compute-0 sudo[218703]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:27 compute-0 sudo[218855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycmmbyoangaabvyxggdgtfxhxiqscmey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161366.8961043-87-209691906063703/AnsiballZ_command.py'
Nov 26 12:49:27 compute-0 sudo[218855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:27 compute-0 ceph-mon[74966]: pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:27 compute-0 python3.9[218857]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:49:27 compute-0 sudo[218855]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:27 compute-0 sudo[219008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgcfrmwsgpzhgloacjlwqrcdnmagnpeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161367.3617265-95-40634847495217/AnsiballZ_stat.py'
Nov 26 12:49:27 compute-0 sudo[219008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:27 compute-0 python3.9[219010]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:49:27 compute-0 sudo[219008]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:28 compute-0 sudo[219131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geedkjckyfxmtviblycsoyrrlmjgvidt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161367.3617265-95-40634847495217/AnsiballZ_copy.py'
Nov 26 12:49:28 compute-0 sudo[219131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:28 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:28 compute-0 python3.9[219133]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764161367.3617265-95-40634847495217/.source.iscsi _original_basename=.5y93t886 follow=False checksum=37953765cb33ad82de40a0a37e146a9224f7da4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:28 compute-0 sudo[219131]: pam_unix(sudo:session): session closed for user root
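Because the stat of /etc/iscsi/.initiator_reset found nothing, the play generated a fresh IQN with /usr/sbin/iscsi-iname and copied it into /etc/iscsi/initiatorname.iscsi (mode 0644), then touched the reset marker. A small sketch of that step, assuming the usual InitiatorName= file format (the log shows the copy but not the file body):

    import subprocess
    from pathlib import Path

    def reset_initiator_name(path="/etc/iscsi/initiatorname.iscsi"):
        """Generate a fresh IQN with iscsi-iname and write the initiator file."""
        iqn = subprocess.run(["/usr/sbin/iscsi-iname"],
                             capture_output=True, text=True, check=True).stdout.strip()
        Path(path).write_text(f"InitiatorName={iqn}\n")
        Path(path).chmod(0o644)
        return iqn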
Nov 26 12:49:28 compute-0 sudo[219283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osbawmhfnnrxbjwvnvgbsuffyyunbjtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161368.3323033-110-47468848499464/AnsiballZ_file.py'
Nov 26 12:49:28 compute-0 sudo[219283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:28 compute-0 python3.9[219285]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:28 compute-0 sudo[219283]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:29 compute-0 ceph-mon[74966]: pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:29 compute-0 sudo[219435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbrrmxvxruswlmqnaruhgdbnyjszsctw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161368.928916-118-140994168452937/AnsiballZ_lineinfile.py'
Nov 26 12:49:29 compute-0 sudo[219435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:29 compute-0 python3.9[219437]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:29 compute-0 sudo[219435]: pam_unix(sudo:session): session closed for user root
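The lineinfile task above pins "node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5" in /etc/iscsi/iscsid.conf: it replaces a matching line if one exists, otherwise it inserts the line after the commented "#node.session.auth.chap.algs" default. A simplified sketch of that replace-or-insert behaviour (illustrative only, not the module's code):

    import re
    from pathlib import Path

    LINE = "node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5"

    def set_chap_algs(path="/etc/iscsi/iscsid.conf"):
        """Replace an existing chap_algs line, else insert after the commented default."""
        lines = Path(path).read_text().splitlines()
        target = re.compile(r"^node\.session\.auth\.chap_algs")
        anchor = re.compile(r"^#node\.session\.auth\.chap\.algs")
        for i, line in enumerate(lines):
            if target.match(line):
                lines[i] = LINE
                break
        else:
            hits = [i for i, l in enumerate(lines) if anchor.match(l)]
            idx = hits[-1] if hits else len(lines) - 1
            lines.insert(idx + 1, LINE)
        Path(path).write_text("\n".join(lines) + "\n")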
Nov 26 12:49:30 compute-0 sudo[219587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fctjgtacnhiajeohqivoeajhdcmykqor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161369.5541365-127-166162866704788/AnsiballZ_systemd_service.py'
Nov 26 12:49:30 compute-0 sudo[219587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:30 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:30 compute-0 python3.9[219589]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:49:30 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 26 12:49:30 compute-0 sudo[219587]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:30 compute-0 sudo[219743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgindpqjnnpxnstnjkipabfjcgdbqjzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161370.4407082-135-219083842035439/AnsiballZ_systemd_service.py'
Nov 26 12:49:30 compute-0 sudo[219743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:30 compute-0 python3.9[219745]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:49:30 compute-0 systemd[1]: Reloading.
Nov 26 12:49:30 compute-0 systemd-rc-local-generator[219767]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:49:30 compute-0 systemd-sysv-generator[219771]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:49:31 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:49:31 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 26 12:49:31 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 26 12:49:31 compute-0 ceph-mon[74966]: pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:31 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Nov 26 12:49:31 compute-0 systemd[1]: Started Open-iSCSI.
Nov 26 12:49:31 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 26 12:49:31 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 26 12:49:31 compute-0 sudo[219743]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:31 compute-0 sudo[219944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlgrgqgvhqrmtfmldwtehldsjmytqkrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161371.5170867-146-104424068085661/AnsiballZ_service_facts.py'
Nov 26 12:49:31 compute-0 sudo[219944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:31 compute-0 python3.9[219946]: ansible-ansible.builtin.service_facts Invoked
Nov 26 12:49:31 compute-0 network[219963]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 12:49:31 compute-0 network[219964]: 'network-scripts' will be removed from distribution in near future.
Nov 26 12:49:31 compute-0 network[219965]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 12:49:32 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:32 compute-0 podman[219972]: 2025-11-26 12:49:32.606545737 +0000 UTC m=+0.039973639 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Nov 26 12:49:33 compute-0 ceph-mon[74966]: pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:33 compute-0 sudo[219944]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:34 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:34 compute-0 sudo[220251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncgplvunqfcshoxiqlxydfgcvqynlruo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161374.1336613-156-33907596367405/AnsiballZ_file.py'
Nov 26 12:49:34 compute-0 sudo[220251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:34 compute-0 python3.9[220253]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 26 12:49:34 compute-0 sudo[220251]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:34 compute-0 sudo[220403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joosmerzrdyrbinzemxswqeaicropfxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161374.600703-164-269649535229522/AnsiballZ_modprobe.py'
Nov 26 12:49:34 compute-0 sudo[220403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:35 compute-0 python3.9[220405]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 26 12:49:35 compute-0 sudo[220403]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:35 compute-0 ceph-mon[74966]: pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:35 compute-0 sudo[220559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzyrydirwrjykkkicinvidneiyjmamlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161375.2105477-172-14801099994985/AnsiballZ_stat.py'
Nov 26 12:49:35 compute-0 sudo[220559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:35 compute-0 python3.9[220561]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:49:35 compute-0 sudo[220559]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:35 compute-0 sudo[220682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajvnqfbvnkbzkbmqhicjprnbgvksctao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161375.2105477-172-14801099994985/AnsiballZ_copy.py'
Nov 26 12:49:35 compute-0 sudo[220682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:49:35
Nov 26 12:49:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 12:49:35 compute-0 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 12:49:35 compute-0 ceph-mgr[75236]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', '.rgw.root', 'backups', 'images', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'vms', 'default.rgw.log', 'cephfs.cephfs.data']
Nov 26 12:49:35 compute-0 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 12:49:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:49:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:49:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:49:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:49:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:49:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:49:35 compute-0 python3.9[220684]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764161375.2105477-172-14801099994985/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:35 compute-0 sudo[220682]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 12:49:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 12:49:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:49:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:49:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:49:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:49:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:49:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:49:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:49:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:49:36 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:49:36 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:36 compute-0 sudo[220834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hosywlcnfsnaqmeeifxmqgabkekbjxfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161376.0782826-188-118805050639424/AnsiballZ_lineinfile.py'
Nov 26 12:49:36 compute-0 sudo[220834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:36 compute-0 python3.9[220836]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:36 compute-0 sudo[220834]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:36 compute-0 sudo[220986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jofzpglygnyeanjitjyxwiijyzorxrxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161376.5265467-196-131495579077698/AnsiballZ_systemd.py'
Nov 26 12:49:36 compute-0 sudo[220986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:37 compute-0 python3.9[220988]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 12:49:37 compute-0 ceph-mon[74966]: pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:37 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 26 12:49:37 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 26 12:49:37 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 26 12:49:37 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 26 12:49:37 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 26 12:49:37 compute-0 sudo[220986]: pam_unix(sudo:session): session closed for user root
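The block of tasks between 12:49:34 and 12:49:37 makes dm-multipath available now and on every boot: modprobe loads it, a drop-in is written to /etc/modules-load.d/dm-multipath.conf, the module name is appended to /etc/modules, and systemd-modules-load.service is restarted to pick up the drop-in. A condensed sketch of the same load-and-persist pattern, using the paths from the log:

    import subprocess
    from pathlib import Path

    MODULE = "dm-multipath"

    def load_and_persist(module=MODULE):
        """Load the module now and persist it via modules-load.d and /etc/modules."""
        subprocess.run(["modprobe", module], check=True)
        Path(f"/etc/modules-load.d/{module}.conf").write_text(module + "\n")
        modules = Path("/etc/modules")
        existing = modules.read_text().splitlines() if modules.exists() else []
        if module not in existing:
            modules.write_text("\n".join(existing + [module]) + "\n")
        # restart so the new drop-in is loaded immediately
        subprocess.run(["systemctl", "restart", "systemd-modules-load.service"], check=True)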
Nov 26 12:49:37 compute-0 sudo[221142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bedpfntvsonioliotybsrrigaeopspcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161377.3855324-204-143499320838268/AnsiballZ_file.py'
Nov 26 12:49:37 compute-0 sudo[221142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:37 compute-0 python3.9[221144]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:49:37 compute-0 sudo[221142]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:38 compute-0 sudo[221294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucgmrcqifxewgnewasaeuxileeubnhlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161377.8841264-213-84525998044028/AnsiballZ_stat.py'
Nov 26 12:49:38 compute-0 sudo[221294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:38 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:38 compute-0 python3.9[221296]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:49:38 compute-0 sudo[221294]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:38 compute-0 sudo[221446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vszbgpoepzkeybafluxympljmfcnvlkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161378.3712146-222-146486218253022/AnsiballZ_stat.py'
Nov 26 12:49:38 compute-0 sudo[221446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:38 compute-0 python3.9[221448]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:49:38 compute-0 sudo[221446]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:38 compute-0 sudo[221598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwonocrhspiwpkrerfjjhimmyfwpxkag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161378.8255506-230-194717949847816/AnsiballZ_stat.py'
Nov 26 12:49:38 compute-0 sudo[221598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:39 compute-0 python3.9[221600]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:49:39 compute-0 sudo[221598]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:39 compute-0 ceph-mon[74966]: pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:39 compute-0 sudo[221721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjaoeealdmgwnhwddksltjdytqevvsvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161378.8255506-230-194717949847816/AnsiballZ_copy.py'
Nov 26 12:49:39 compute-0 sudo[221721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:39 compute-0 python3.9[221723]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764161378.8255506-230-194717949847816/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:39 compute-0 sudo[221721]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:39 compute-0 sudo[221873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdwjpxnrfwnoqcsfczuzqvxhrqsihhae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161379.6166852-245-111542810361859/AnsiballZ_command.py'
Nov 26 12:49:39 compute-0 sudo[221873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:39 compute-0 python3.9[221875]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:49:39 compute-0 sudo[221873]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:40 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:40 compute-0 sudo[222026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lytkpwfuniquejpgshrfqvxyqjkxxjmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161380.047653-253-112854525898380/AnsiballZ_lineinfile.py'
Nov 26 12:49:40 compute-0 sudo[222026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:40 compute-0 python3.9[222028]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:40 compute-0 sudo[222026]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:40 compute-0 sudo[222178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlekfdjvtnkvkmekzdigdxxudzxeshgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161380.5112364-261-230319426331068/AnsiballZ_replace.py'
Nov 26 12:49:40 compute-0 sudo[222178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:41 compute-0 python3.9[222180]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:41 compute-0 sudo[222178]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:49:41 compute-0 ceph-mon[74966]: pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:41 compute-0 sudo[222330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izzdensbmnqwgkdglkdwnmqbrjplzozl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161381.138633-269-26697229170907/AnsiballZ_replace.py'
Nov 26 12:49:41 compute-0 sudo[222330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:41 compute-0 python3.9[222332]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:41 compute-0 sudo[222330]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:41 compute-0 sudo[222482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-effsnodzihzdnfwhgxrjyencjfcxlbpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161381.619936-278-255721554020663/AnsiballZ_lineinfile.py'
Nov 26 12:49:41 compute-0 sudo[222482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:41 compute-0 python3.9[222484]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:41 compute-0 sudo[222482]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:42 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:42 compute-0 sudo[222634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxrrmwgdfzaqwytsejrtzsymrwnmwkwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161382.067781-278-230319806477598/AnsiballZ_lineinfile.py'
Nov 26 12:49:42 compute-0 sudo[222634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:42 compute-0 python3.9[222636]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:42 compute-0 sudo[222634]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:42 compute-0 sudo[222786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywbcphxnyiqexvqrfkzcxeaztjapuwpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161382.4985404-278-56029395028701/AnsiballZ_lineinfile.py'
Nov 26 12:49:42 compute-0 sudo[222786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:42 compute-0 python3.9[222788]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:42 compute-0 sudo[222786]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:43 compute-0 sudo[222947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkoskpwpogbbljtopgraueerfiadetlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161382.9214911-278-177657244281558/AnsiballZ_lineinfile.py'
Nov 26 12:49:43 compute-0 sudo[222947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:43 compute-0 podman[222912]: 2025-11-26 12:49:43.140372366 +0000 UTC m=+0.063478840 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller)
Nov 26 12:49:43 compute-0 ceph-mon[74966]: pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:43 compute-0 python3.9[222957]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:43 compute-0 sudo[222947]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:43 compute-0 sudo[223113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwfevanjggevrnrqlzqjzthdcqlcjsrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161383.3981402-307-54761173541643/AnsiballZ_stat.py'
Nov 26 12:49:43 compute-0 sudo[223113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:43 compute-0 python3.9[223115]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:49:43 compute-0 sudo[223113]: pam_unix(sudo:session): session closed for user root
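[editor's note] Taken together, the lineinfile/replace tasks logged between 12:49:40 and 12:49:43 rewrite /etc/multipath.conf: the "blacklist {" line is ensured, a closing brace is appended, the default devnode ".*" entry is stripped out of the blacklist, and four keys are inserted after the ^defaults line. The resulting file is never printed to the log, so the following is only a sketch of the expected outcome (key order and indentation are illustrative, and it assumes a stock defaults section already existed):

    defaults {
            user_friendly_names no
            skip_kpartx yes
            recheck_wwid yes
            find_multipaths yes
    }

    blacklist {
    }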
Nov 26 12:49:44 compute-0 sudo[223267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkvxipqmpnjfceboabisikljzqfmukgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161383.8675113-315-185326622795051/AnsiballZ_file.py'
Nov 26 12:49:44 compute-0 sudo[223267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:44 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:44 compute-0 python3.9[223269]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:44 compute-0 sudo[223267]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:44 compute-0 sudo[223419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqtdrqrasizfrshtpyaazgnlsewcnghb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161384.4048305-324-123374290404807/AnsiballZ_file.py'
Nov 26 12:49:44 compute-0 sudo[223419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:44 compute-0 python3.9[223421]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:49:44 compute-0 sudo[223419]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:45 compute-0 sudo[223571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdhwbhaybnoyklcjotrlqsbxecdpslga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161384.8675928-332-9434111111062/AnsiballZ_stat.py'
Nov 26 12:49:45 compute-0 sudo[223571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 12:49:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:49:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 12:49:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:49:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:49:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:49:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:49:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:49:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:49:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:49:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:49:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:49:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 12:49:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:49:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:49:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:49:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 12:49:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:49:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 12:49:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:49:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:49:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:49:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 12:49:45 compute-0 python3.9[223573]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:49:45 compute-0 ceph-mon[74966]: pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:45 compute-0 sudo[223571]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:45 compute-0 sudo[223649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kakyrbphysxamvwynfjzvazhesajkstj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161384.8675928-332-9434111111062/AnsiballZ_file.py'
Nov 26 12:49:45 compute-0 sudo[223649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:45 compute-0 python3.9[223651]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:49:45 compute-0 sudo[223649]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:45 compute-0 sudo[223801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzolkbdiaatawwkxvxzkqdgoakbuvhuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161385.6711695-332-84621878866382/AnsiballZ_stat.py'
Nov 26 12:49:45 compute-0 sudo[223801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:49:46 compute-0 python3.9[223803]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:49:46 compute-0 sudo[223801]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:46 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:46 compute-0 sudo[223879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjdpybxmbnaqdvyhdhiimemogganyjnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161385.6711695-332-84621878866382/AnsiballZ_file.py'
Nov 26 12:49:46 compute-0 sudo[223879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:46 compute-0 python3.9[223881]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:49:46 compute-0 sudo[223879]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:46 compute-0 sudo[224031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bldpeuhvlhdbkrspeydcipvdlbocmpdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161386.5007617-355-53582311309409/AnsiballZ_file.py'
Nov 26 12:49:46 compute-0 sudo[224031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:46 compute-0 python3.9[224033]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:46 compute-0 sudo[224031]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:47 compute-0 sudo[224183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jguniqzxfpyfvpalkxpdictdprdabtve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161386.960709-363-63979257376418/AnsiballZ_stat.py'
Nov 26 12:49:47 compute-0 sudo[224183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:47 compute-0 ceph-mon[74966]: pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:47 compute-0 python3.9[224185]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:49:47 compute-0 sudo[224183]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:47 compute-0 sudo[224261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdzmckcsceosjrouomtmuwzevlqjswqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161386.960709-363-63979257376418/AnsiballZ_file.py'
Nov 26 12:49:47 compute-0 sudo[224261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:47 compute-0 python3.9[224263]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:47 compute-0 sudo[224261]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:47 compute-0 sudo[224413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqzdxqhsbflvwdnrvugmilrnhejlxuuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161387.7386234-375-186436057194997/AnsiballZ_stat.py'
Nov 26 12:49:47 compute-0 sudo[224413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:48 compute-0 python3.9[224415]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:49:48 compute-0 sudo[224413]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:48 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:48 compute-0 sudo[224491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdjgmdrxjpkivxgvfxdlfuvikdanxogt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161387.7386234-375-186436057194997/AnsiballZ_file.py'
Nov 26 12:49:48 compute-0 sudo[224491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:48 compute-0 python3.9[224493]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:48 compute-0 sudo[224491]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:48 compute-0 sudo[224643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jutwczopvpmgacfklzlqckqqbhfjoyle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161388.5056772-387-242531206553443/AnsiballZ_systemd.py'
Nov 26 12:49:48 compute-0 sudo[224643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:48 compute-0 python3.9[224645]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:49:48 compute-0 systemd[1]: Reloading.
Nov 26 12:49:49 compute-0 systemd-rc-local-generator[224669]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:49:49 compute-0 systemd-sysv-generator[224675]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:49:49 compute-0 ceph-mon[74966]: pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:49 compute-0 sudo[224643]: pam_unix(sudo:session): session closed for user root
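[editor's note] The ansible.builtin.systemd invocation at 12:49:48 (daemon_reload=True, enabled=True, state=started, name=edpm-container-shutdown) is roughly equivalent to running the following on the host; this is a sketch of the equivalent commands, not output taken from the log:

    systemctl daemon-reload
    systemctl enable --now edpm-container-shutdown.service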
Nov 26 12:49:49 compute-0 sudo[224831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msucdxwobfceddukksixbvtqkadzaosi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161389.3611255-395-161859982850758/AnsiballZ_stat.py'
Nov 26 12:49:49 compute-0 sudo[224831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:49 compute-0 python3.9[224833]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:49:49 compute-0 sudo[224831]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:49 compute-0 sudo[224909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrfgyalndrcrqsxdhdczinrsutscxbsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161389.3611255-395-161859982850758/AnsiballZ_file.py'
Nov 26 12:49:49 compute-0 sudo[224909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:50 compute-0 python3.9[224911]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:50 compute-0 sudo[224909]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:50 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:50 compute-0 sudo[225061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzebkemdbnjqoskyvfxbzvjhbldsspgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161390.1412206-407-156433487914592/AnsiballZ_stat.py'
Nov 26 12:49:50 compute-0 sudo[225061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:50 compute-0 python3.9[225063]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:49:50 compute-0 sudo[225061]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:50 compute-0 sudo[225139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdzpdxxyilkiapyzyibepfnmrqrmvlii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161390.1412206-407-156433487914592/AnsiballZ_file.py'
Nov 26 12:49:50 compute-0 sudo[225139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:50 compute-0 python3.9[225141]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:50 compute-0 sudo[225139]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:51 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:49:51 compute-0 sudo[225291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdyfzpjaobnfymfxewjwydqbmldqujbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161390.895686-419-251448056554934/AnsiballZ_systemd.py'
Nov 26 12:49:51 compute-0 sudo[225291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:51 compute-0 ceph-mon[74966]: pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:51 compute-0 python3.9[225293]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:49:51 compute-0 systemd[1]: Reloading.
Nov 26 12:49:51 compute-0 systemd-sysv-generator[225320]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:49:51 compute-0 systemd-rc-local-generator[225317]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:49:51 compute-0 systemd[1]: Starting Create netns directory...
Nov 26 12:49:51 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 26 12:49:51 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 26 12:49:51 compute-0 systemd[1]: Finished Create netns directory.
Nov 26 12:49:51 compute-0 sudo[225291]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:52 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:52 compute-0 sudo[225483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmgnraogmhffbwvqqkzzcpsqjpqdqhax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161391.947207-429-50144617044730/AnsiballZ_file.py'
Nov 26 12:49:52 compute-0 sudo[225483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:52 compute-0 python3.9[225485]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:49:52 compute-0 sudo[225483]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:52 compute-0 sudo[225635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooogtojrljvilfuscdjibdvhkfvivgfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161392.4157836-437-180485806795733/AnsiballZ_stat.py'
Nov 26 12:49:52 compute-0 sudo[225635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:52 compute-0 python3.9[225637]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:49:52 compute-0 sudo[225635]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:52 compute-0 sudo[225758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krboqwqjialaaextaiklcrwnhimaayfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161392.4157836-437-180485806795733/AnsiballZ_copy.py'
Nov 26 12:49:52 compute-0 sudo[225758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:53 compute-0 python3.9[225760]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161392.4157836-437-180485806795733/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:49:53 compute-0 sudo[225758]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:53 compute-0 ceph-mon[74966]: pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:53 compute-0 sudo[225910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clatgppbusmaxplblkltwbvodoeddhnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161393.4024-454-47608417160174/AnsiballZ_file.py'
Nov 26 12:49:53 compute-0 sudo[225910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:53 compute-0 python3.9[225912]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:49:53 compute-0 sudo[225910]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:54 compute-0 sudo[226062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rynpxtakawuvxicmjktpckhqoenzijvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161393.8781934-462-199446952216618/AnsiballZ_stat.py'
Nov 26 12:49:54 compute-0 sudo[226062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:54 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:54 compute-0 python3.9[226064]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:49:54 compute-0 sudo[226062]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:54 compute-0 sudo[226185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxmbxfxxkjqtgvggasmfadeqyxsbnyqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161393.8781934-462-199446952216618/AnsiballZ_copy.py'
Nov 26 12:49:54 compute-0 sudo[226185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:54 compute-0 python3.9[226187]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764161393.8781934-462-199446952216618/.source.json _original_basename=.hatuph8k follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:54 compute-0 sudo[226185]: pam_unix(sudo:session): session closed for user root
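[editor's note] The copy task above installs /var/lib/kolla/config_files/multipathd.json (mode 0600), which is later bind-mounted into the container as /var/lib/kolla/config_files/config.json. Only the file's sha1 appears in the log, so the snippet below is a representative kolla config.json sketch, not the deployed content; the "command" value in particular is an assumption:

    {
        "command": "/usr/sbin/multipathd -d",
        "config_files": []
    }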
Nov 26 12:49:54 compute-0 sudo[226337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnhbrehrnspgcsaspkbztbfbqwrmbjlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161394.7048807-477-273439289322266/AnsiballZ_file.py'
Nov 26 12:49:54 compute-0 sudo[226337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:55 compute-0 python3.9[226339]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:49:55 compute-0 sudo[226337]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:55 compute-0 ceph-mon[74966]: pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:55 compute-0 sudo[226489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgluheeipsntcyiuyevnjjhkepjxxbmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161395.182263-485-150215196912321/AnsiballZ_stat.py'
Nov 26 12:49:55 compute-0 sudo[226489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:55 compute-0 sudo[226489]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:55 compute-0 sudo[226612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqjtjvqpsjajriteloftvngvxqgakngz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161395.182263-485-150215196912321/AnsiballZ_copy.py'
Nov 26 12:49:55 compute-0 sudo[226612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:55 compute-0 sudo[226612]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:49:56 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:56 compute-0 sudo[226764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lreickylrfwygcfmawbekfzzrptaricj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161396.070095-502-138992490793738/AnsiballZ_container_config_data.py'
Nov 26 12:49:56 compute-0 sudo[226764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:56 compute-0 python3.9[226766]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 26 12:49:56 compute-0 sudo[226764]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:56 compute-0 sudo[226916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpduxlicrtbujujsgdkrfxcjqreutkol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161396.659953-511-232427926701375/AnsiballZ_container_config_hash.py'
Nov 26 12:49:56 compute-0 sudo[226916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:57 compute-0 python3.9[226918]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 12:49:57 compute-0 sudo[226916]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:57 compute-0 ceph-mon[74966]: pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:57 compute-0 sudo[227068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzmsazhdutlofwnqvhcozyhpmtdmqbqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161397.2585166-520-164923611974457/AnsiballZ_podman_container_info.py'
Nov 26 12:49:57 compute-0 sudo[227068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:57 compute-0 python3.9[227070]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 26 12:49:57 compute-0 sudo[227068]: pam_unix(sudo:session): session closed for user root
Nov 26 12:49:58 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:49:58 compute-0 sudo[227239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvicbsuxniddpvtwhbftlmttynnkztmr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764161398.2867162-533-50106142140239/AnsiballZ_edpm_container_manage.py'
Nov 26 12:49:58 compute-0 sudo[227239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:49:58 compute-0 python3[227241]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 12:49:59 compute-0 ceph-mon[74966]: pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:00 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:00 compute-0 podman[227252]: 2025-11-26 12:50:00.423867713 +0000 UTC m=+1.511643241 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 26 12:50:00 compute-0 podman[227297]: 2025-11-26 12:50:00.55607657 +0000 UTC m=+0.033579438 container create fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:50:00 compute-0 podman[227297]: 2025-11-26 12:50:00.539145526 +0000 UTC m=+0.016648423 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 26 12:50:00 compute-0 python3[227241]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 26 12:50:00 compute-0 sudo[227239]: pam_unix(sudo:session): session closed for user root
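[editor's note] The PODMAN-CONTAINER-DEBUG line above records the exact podman create invocation used for the multipathd container. To check the result by hand on the node (assuming root, as in the log), standard podman commands such as these apply:

    podman ps -a --filter name=multipathd
    podman inspect --format '{{.State.Status}}' multipathd
    # once the systemd unit created below has started the container:
    podman healthcheck run multipathd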
Nov 26 12:50:01 compute-0 sudo[227475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqvlmnolnosyijedrnhomhmytkriuvpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161400.8130012-541-27863696163714/AnsiballZ_stat.py'
Nov 26 12:50:01 compute-0 sudo[227475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:01 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:50:01 compute-0 python3.9[227477]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:50:01 compute-0 sudo[227475]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:01 compute-0 ceph-mon[74966]: pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:01 compute-0 sudo[227629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmqsxdagizxdtdnhcyyaqynqojsfaovt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161401.379786-550-190121451994912/AnsiballZ_file.py'
Nov 26 12:50:01 compute-0 sudo[227629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:50:01.724 159053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 12:50:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:50:01.725 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 12:50:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:50:01.725 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 12:50:01 compute-0 python3.9[227631]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:50:01 compute-0 sudo[227629]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:01 compute-0 sudo[227632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:50:01 compute-0 sudo[227632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:50:01 compute-0 sudo[227632]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:01 compute-0 sudo[227657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:50:01 compute-0 sudo[227657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:50:01 compute-0 sudo[227657]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:01 compute-0 sudo[227705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:50:01 compute-0 sudo[227705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:50:01 compute-0 sudo[227705]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:01 compute-0 sudo[227754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 26 12:50:01 compute-0 sudo[227754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:50:01 compute-0 sudo[227804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvshwgwzzyvxeuqzlyfjupfxpmwcrlyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161401.379786-550-190121451994912/AnsiballZ_stat.py'
Nov 26 12:50:01 compute-0 sudo[227804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:02 compute-0 python3.9[227807]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:50:02 compute-0 sudo[227804]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:02 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:02 compute-0 podman[227917]: 2025-11-26 12:50:02.310696687 +0000 UTC m=+0.055156449 container exec ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 12:50:02 compute-0 podman[227917]: 2025-11-26 12:50:02.390253501 +0000 UTC m=+0.134713263 container exec_died ba65664ab41f80b9105342861c31c0fd030236b6624fe1c91b51915b19d6c537 (image=quay.io/ceph/ceph:v18, name=ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 12:50:02 compute-0 sudo[228057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nccjlgglqnkqxnauwvpidhpoeloctiwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161402.1583614-550-216499144774527/AnsiballZ_copy.py'
Nov 26 12:50:02 compute-0 sudo[228057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:02 compute-0 python3.9[228067]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764161402.1583614-550-216499144774527/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:50:02 compute-0 podman[228088]: 2025-11-26 12:50:02.692554613 +0000 UTC m=+0.054639065 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 26 12:50:02 compute-0 sudo[228057]: pam_unix(sudo:session): session closed for user root
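[editor's note] The ansible-copy task at 12:50:02 installs /etc/systemd/system/edpm_multipathd.service, and the daemon-reload that follows picks it up; the unit body itself is not logged. The block below is a representative sketch only, assuming the unit wraps the container through the edpm-start-podman-container helper deployed earlier under /var/local/libexec; the real file may differ:

    [Unit]
    Description=multipathd container
    After=network-online.target

    [Service]
    Restart=always
    ExecStart=/var/local/libexec/edpm-start-podman-container multipathd
    ExecStop=/usr/bin/podman stop -t 10 multipathd

    [Install]
    WantedBy=multi-user.target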
Nov 26 12:50:02 compute-0 sudo[227754]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:02 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:50:02 compute-0 sudo[228235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtaklfblfkuxgdfsruahrzfiaslhndvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161402.1583614-550-216499144774527/AnsiballZ_systemd.py'
Nov 26 12:50:02 compute-0 sudo[228235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:02 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:50:02 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:50:02 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:50:02 compute-0 sudo[228238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:50:02 compute-0 sudo[228238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:50:02 compute-0 sudo[228238]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:03 compute-0 sudo[228263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:50:03 compute-0 sudo[228263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:50:03 compute-0 sudo[228263]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:03 compute-0 sudo[228288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:50:03 compute-0 sudo[228288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:50:03 compute-0 sudo[228288]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:03 compute-0 sudo[228313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 12:50:03 compute-0 sudo[228313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:50:03 compute-0 python3.9[228237]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 12:50:03 compute-0 systemd[1]: Reloading.
Nov 26 12:50:03 compute-0 ceph-mon[74966]: pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:03 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:50:03 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:50:03 compute-0 systemd-sysv-generator[228372]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:50:03 compute-0 systemd-rc-local-generator[228366]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:50:03 compute-0 sudo[228235]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:03 compute-0 sudo[228313]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:03 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:50:03 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:50:03 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 12:50:03 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:50:03 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 12:50:03 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:50:03 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 842686d2-2fff-4076-9c56-91403ce010df does not exist
Nov 26 12:50:03 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev e968660b-3a79-4702-85a5-0089076802c6 does not exist
Nov 26 12:50:03 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev cb478dc7-42d3-4908-8ba3-444a6ae24db9 does not exist
Nov 26 12:50:03 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 12:50:03 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:50:03 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 12:50:03 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:50:03 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:50:03 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:50:03 compute-0 sudo[228423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:50:03 compute-0 sudo[228423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:50:03 compute-0 sudo[228423]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:03 compute-0 sudo[228471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:50:03 compute-0 sudo[228471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:50:03 compute-0 sudo[228471]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:03 compute-0 sudo[228527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjszwixxdusjdcgclrxtfhilqrmfdydn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161402.1583614-550-216499144774527/AnsiballZ_systemd.py'
Nov 26 12:50:03 compute-0 sudo[228527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:03 compute-0 sudo[228520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:50:03 compute-0 sudo[228520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:50:03 compute-0 sudo[228520]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:03 compute-0 sudo[228551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 12:50:03 compute-0 sudo[228551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:50:03 compute-0 python3.9[228543]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:50:04 compute-0 systemd[1]: Reloading.
Nov 26 12:50:04 compute-0 podman[228610]: 2025-11-26 12:50:04.086904936 +0000 UTC m=+0.044686438 container create 2bb6433c28429467e6a8084d26585166c75640f05299f591cf6de415b38bdca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:50:04 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:04 compute-0 systemd-rc-local-generator[228643]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:50:04 compute-0 systemd-sysv-generator[228649]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:50:04 compute-0 podman[228610]: 2025-11-26 12:50:04.067860874 +0000 UTC m=+0.025642396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:50:04 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:50:04 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:50:04 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:50:04 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:50:04 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:50:04 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:50:04 compute-0 systemd[1]: Started libpod-conmon-2bb6433c28429467e6a8084d26585166c75640f05299f591cf6de415b38bdca7.scope.
Nov 26 12:50:04 compute-0 systemd[1]: Starting multipathd container...
Nov 26 12:50:04 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:50:04 compute-0 podman[228610]: 2025-11-26 12:50:04.370415171 +0000 UTC m=+0.328196694 container init 2bb6433c28429467e6a8084d26585166c75640f05299f591cf6de415b38bdca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 12:50:04 compute-0 podman[228610]: 2025-11-26 12:50:04.377790647 +0000 UTC m=+0.335572149 container start 2bb6433c28429467e6a8084d26585166c75640f05299f591cf6de415b38bdca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:50:04 compute-0 podman[228610]: 2025-11-26 12:50:04.381057677 +0000 UTC m=+0.338839199 container attach 2bb6433c28429467e6a8084d26585166c75640f05299f591cf6de415b38bdca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 12:50:04 compute-0 systemd[1]: libpod-2bb6433c28429467e6a8084d26585166c75640f05299f591cf6de415b38bdca7.scope: Deactivated successfully.
Nov 26 12:50:04 compute-0 peaceful_mahavira[228659]: 167 167
Nov 26 12:50:04 compute-0 conmon[228659]: conmon 2bb6433c28429467e6a8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2bb6433c28429467e6a8084d26585166c75640f05299f591cf6de415b38bdca7.scope/container/memory.events
Nov 26 12:50:04 compute-0 podman[228672]: 2025-11-26 12:50:04.430833956 +0000 UTC m=+0.026269857 container died 2bb6433c28429467e6a8084d26585166c75640f05299f591cf6de415b38bdca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 12:50:04 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:50:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8f06bab68b21e3fd5367ae24e1029ab44e8c537fab9cac34e1390bdb0ebe49/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 26 12:50:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8f06bab68b21e3fd5367ae24e1029ab44e8c537fab9cac34e1390bdb0ebe49/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 26 12:50:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-00cd57d77db9aa25c86b96fe345dda1552d71e29f386740550d31a36292565d4-merged.mount: Deactivated successfully.
Nov 26 12:50:04 compute-0 podman[228672]: 2025-11-26 12:50:04.459162027 +0000 UTC m=+0.054597928 container remove 2bb6433c28429467e6a8084d26585166c75640f05299f591cf6de415b38bdca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 12:50:04 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636.
Nov 26 12:50:04 compute-0 systemd[1]: libpod-conmon-2bb6433c28429467e6a8084d26585166c75640f05299f591cf6de415b38bdca7.scope: Deactivated successfully.
Nov 26 12:50:04 compute-0 podman[228661]: 2025-11-26 12:50:04.473602872 +0000 UTC m=+0.107875888 container init fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 12:50:04 compute-0 multipathd[228682]: + sudo -E kolla_set_configs
Nov 26 12:50:04 compute-0 podman[228661]: 2025-11-26 12:50:04.49245287 +0000 UTC m=+0.126725886 container start fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 26 12:50:04 compute-0 podman[228661]: multipathd
Nov 26 12:50:04 compute-0 systemd[1]: Started multipathd container.
Nov 26 12:50:04 compute-0 sudo[228695]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 26 12:50:04 compute-0 sudo[228695]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 26 12:50:04 compute-0 sudo[228695]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 26 12:50:04 compute-0 sudo[228527]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:04 compute-0 multipathd[228682]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 12:50:04 compute-0 multipathd[228682]: INFO:__main__:Validating config file
Nov 26 12:50:04 compute-0 multipathd[228682]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 12:50:04 compute-0 multipathd[228682]: INFO:__main__:Writing out command to execute
Nov 26 12:50:04 compute-0 sudo[228695]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:04 compute-0 multipathd[228682]: ++ cat /run_command
Nov 26 12:50:04 compute-0 multipathd[228682]: + CMD='/usr/sbin/multipathd -d'
Nov 26 12:50:04 compute-0 multipathd[228682]: + ARGS=
Nov 26 12:50:04 compute-0 multipathd[228682]: + sudo kolla_copy_cacerts
Nov 26 12:50:04 compute-0 sudo[228739]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 26 12:50:04 compute-0 sudo[228739]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 26 12:50:04 compute-0 sudo[228739]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 26 12:50:04 compute-0 sudo[228739]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:04 compute-0 multipathd[228682]: + [[ ! -n '' ]]
Nov 26 12:50:04 compute-0 multipathd[228682]: + . kolla_extend_start
Nov 26 12:50:04 compute-0 multipathd[228682]: Running command: '/usr/sbin/multipathd -d'
Nov 26 12:50:04 compute-0 multipathd[228682]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 26 12:50:04 compute-0 multipathd[228682]: + umask 0022
Nov 26 12:50:04 compute-0 multipathd[228682]: + exec /usr/sbin/multipathd -d
Nov 26 12:50:04 compute-0 podman[228696]: 2025-11-26 12:50:04.60948831 +0000 UTC m=+0.101418942 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 26 12:50:04 compute-0 multipathd[228682]: 2506.228861 | --------start up--------
Nov 26 12:50:04 compute-0 multipathd[228682]: 2506.228875 | read /etc/multipath.conf
Nov 26 12:50:04 compute-0 systemd[1]: fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636-2b53ac392e683dac.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 12:50:04 compute-0 multipathd[228682]: 2506.233896 | path checkers start up
Nov 26 12:50:04 compute-0 systemd[1]: fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636-2b53ac392e683dac.service: Failed with result 'exit-code'.
Nov 26 12:50:04 compute-0 podman[228738]: 2025-11-26 12:50:04.650527571 +0000 UTC m=+0.057702843 container create d0158ce6598fcb19cdfdf6d73058b1a8f6372a9c54bbdedf362ad32051203213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bassi, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:50:04 compute-0 systemd[1]: Started libpod-conmon-d0158ce6598fcb19cdfdf6d73058b1a8f6372a9c54bbdedf362ad32051203213.scope.
Nov 26 12:50:04 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:50:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e5f48dc6746f11e5faacf8cdadba3aa69aecc440bf5674d18ebb96c5eee17c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:50:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e5f48dc6746f11e5faacf8cdadba3aa69aecc440bf5674d18ebb96c5eee17c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:50:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e5f48dc6746f11e5faacf8cdadba3aa69aecc440bf5674d18ebb96c5eee17c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:50:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e5f48dc6746f11e5faacf8cdadba3aa69aecc440bf5674d18ebb96c5eee17c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:50:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e5f48dc6746f11e5faacf8cdadba3aa69aecc440bf5674d18ebb96c5eee17c4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:50:04 compute-0 podman[228738]: 2025-11-26 12:50:04.724982279 +0000 UTC m=+0.132157572 container init d0158ce6598fcb19cdfdf6d73058b1a8f6372a9c54bbdedf362ad32051203213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bassi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 12:50:04 compute-0 podman[228738]: 2025-11-26 12:50:04.631354856 +0000 UTC m=+0.038530148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:50:04 compute-0 podman[228738]: 2025-11-26 12:50:04.732278946 +0000 UTC m=+0.139454217 container start d0158ce6598fcb19cdfdf6d73058b1a8f6372a9c54bbdedf362ad32051203213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 12:50:04 compute-0 podman[228738]: 2025-11-26 12:50:04.733574555 +0000 UTC m=+0.140749827 container attach d0158ce6598fcb19cdfdf6d73058b1a8f6372a9c54bbdedf362ad32051203213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bassi, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 12:50:05 compute-0 python3.9[228898]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:50:05 compute-0 ceph-mon[74966]: pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:05 compute-0 sudo[229059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heklfjsgdaemmnturdygvgxjgbmkuyrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161405.2084966-586-65423154269477/AnsiballZ_command.py'
Nov 26 12:50:05 compute-0 sudo[229059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:05 compute-0 python3.9[229063]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:50:05 compute-0 intelligent_bassi[228786]: --> passed data devices: 0 physical, 3 LVM
Nov 26 12:50:05 compute-0 intelligent_bassi[228786]: --> relative data size: 1.0
Nov 26 12:50:05 compute-0 intelligent_bassi[228786]: --> All data devices are unavailable
Nov 26 12:50:05 compute-0 systemd[1]: libpod-d0158ce6598fcb19cdfdf6d73058b1a8f6372a9c54bbdedf362ad32051203213.scope: Deactivated successfully.
Nov 26 12:50:05 compute-0 podman[228738]: 2025-11-26 12:50:05.624426252 +0000 UTC m=+1.031601524 container died d0158ce6598fcb19cdfdf6d73058b1a8f6372a9c54bbdedf362ad32051203213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:50:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e5f48dc6746f11e5faacf8cdadba3aa69aecc440bf5674d18ebb96c5eee17c4-merged.mount: Deactivated successfully.
Nov 26 12:50:05 compute-0 sudo[229059]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:05 compute-0 podman[228738]: 2025-11-26 12:50:05.667870951 +0000 UTC m=+1.075046223 container remove d0158ce6598fcb19cdfdf6d73058b1a8f6372a9c54bbdedf362ad32051203213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bassi, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 12:50:05 compute-0 systemd[1]: libpod-conmon-d0158ce6598fcb19cdfdf6d73058b1a8f6372a9c54bbdedf362ad32051203213.scope: Deactivated successfully.
Nov 26 12:50:05 compute-0 sudo[228551]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:05 compute-0 sudo[229119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:50:05 compute-0 sudo[229119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:50:05 compute-0 sudo[229119]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:05 compute-0 sudo[229145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:50:05 compute-0 sudo[229145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:50:05 compute-0 sudo[229145]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:05 compute-0 sudo[229196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:50:05 compute-0 sudo[229196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:50:05 compute-0 sudo[229196]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:05 compute-0 sudo[229247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- lvm list --format json
Nov 26 12:50:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:50:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:50:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:50:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:50:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:50:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:50:05 compute-0 sudo[229247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:50:06 compute-0 sudo[229344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utapmcmnadeqticzodqzhgsbkivknuol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161405.7846227-594-30103071005408/AnsiballZ_systemd.py'
Nov 26 12:50:06 compute-0 sudo[229344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:50:06 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:06 compute-0 podman[229378]: 2025-11-26 12:50:06.202496066 +0000 UTC m=+0.037641204 container create deab93b67465f8035838c2b0396b339056426251e8b6e8d68ec51d2291894679 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 12:50:06 compute-0 systemd[1]: Started libpod-conmon-deab93b67465f8035838c2b0396b339056426251e8b6e8d68ec51d2291894679.scope.
Nov 26 12:50:06 compute-0 python3.9[229348]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 12:50:06 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:50:06 compute-0 podman[229378]: 2025-11-26 12:50:06.277165438 +0000 UTC m=+0.112310596 container init deab93b67465f8035838c2b0396b339056426251e8b6e8d68ec51d2291894679 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 12:50:06 compute-0 podman[229378]: 2025-11-26 12:50:06.18726374 +0000 UTC m=+0.022408908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:50:06 compute-0 podman[229378]: 2025-11-26 12:50:06.283692085 +0000 UTC m=+0.118837224 container start deab93b67465f8035838c2b0396b339056426251e8b6e8d68ec51d2291894679 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_proskuriakova, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 26 12:50:06 compute-0 podman[229378]: 2025-11-26 12:50:06.284898877 +0000 UTC m=+0.120044016 container attach deab93b67465f8035838c2b0396b339056426251e8b6e8d68ec51d2291894679 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:50:06 compute-0 condescending_proskuriakova[229391]: 167 167
Nov 26 12:50:06 compute-0 podman[229378]: 2025-11-26 12:50:06.287918923 +0000 UTC m=+0.123064061 container died deab93b67465f8035838c2b0396b339056426251e8b6e8d68ec51d2291894679 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 12:50:06 compute-0 systemd[1]: libpod-deab93b67465f8035838c2b0396b339056426251e8b6e8d68ec51d2291894679.scope: Deactivated successfully.
Nov 26 12:50:06 compute-0 systemd[1]: Stopping multipathd container...
Nov 26 12:50:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-9aface5e9abb9e150b8425e12b2a79bfa83ad77e4450bebe56d9c56fc248b133-merged.mount: Deactivated successfully.
Nov 26 12:50:06 compute-0 podman[229378]: 2025-11-26 12:50:06.316446068 +0000 UTC m=+0.151591206 container remove deab93b67465f8035838c2b0396b339056426251e8b6e8d68ec51d2291894679 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_proskuriakova, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 12:50:06 compute-0 systemd[1]: libpod-conmon-deab93b67465f8035838c2b0396b339056426251e8b6e8d68ec51d2291894679.scope: Deactivated successfully.
Nov 26 12:50:06 compute-0 multipathd[228682]: 2507.971208 | exit (signal)
Nov 26 12:50:06 compute-0 multipathd[228682]: 2507.971655 | --------shut down-------
Nov 26 12:50:06 compute-0 systemd[1]: libpod-fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636.scope: Deactivated successfully.
Nov 26 12:50:06 compute-0 podman[229400]: 2025-11-26 12:50:06.377195926 +0000 UTC m=+0.063298066 container died fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:50:06 compute-0 systemd[1]: fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636-2b53ac392e683dac.timer: Deactivated successfully.
Nov 26 12:50:06 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636.
Nov 26 12:50:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636-userdata-shm.mount: Deactivated successfully.
Nov 26 12:50:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a8f06bab68b21e3fd5367ae24e1029ab44e8c537fab9cac34e1390bdb0ebe49-merged.mount: Deactivated successfully.
Nov 26 12:50:06 compute-0 podman[229400]: 2025-11-26 12:50:06.429145316 +0000 UTC m=+0.115247456 container cleanup fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 26 12:50:06 compute-0 podman[229400]: multipathd
Nov 26 12:50:06 compute-0 podman[229438]: 2025-11-26 12:50:06.472360102 +0000 UTC m=+0.036193738 container create d0e44d8ace9d3a4666fc33d7ca8e444a9c5022b8ef84a44abf7dd2bf5d9aa469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 12:50:06 compute-0 podman[229447]: multipathd
Nov 26 12:50:06 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 26 12:50:06 compute-0 systemd[1]: Stopped multipathd container.
Nov 26 12:50:06 compute-0 systemd[1]: Starting multipathd container...
Nov 26 12:50:06 compute-0 systemd[1]: Started libpod-conmon-d0e44d8ace9d3a4666fc33d7ca8e444a9c5022b8ef84a44abf7dd2bf5d9aa469.scope.
Nov 26 12:50:06 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:50:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/301badca497ba2ff74888eae09fb817777ca250724f1e3b28c9d5b2c038510ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:50:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/301badca497ba2ff74888eae09fb817777ca250724f1e3b28c9d5b2c038510ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:50:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/301badca497ba2ff74888eae09fb817777ca250724f1e3b28c9d5b2c038510ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:50:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/301badca497ba2ff74888eae09fb817777ca250724f1e3b28c9d5b2c038510ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:50:06 compute-0 podman[229438]: 2025-11-26 12:50:06.55323646 +0000 UTC m=+0.117070097 container init d0e44d8ace9d3a4666fc33d7ca8e444a9c5022b8ef84a44abf7dd2bf5d9aa469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_agnesi, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 12:50:06 compute-0 podman[229438]: 2025-11-26 12:50:06.458972238 +0000 UTC m=+0.022805895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:50:06 compute-0 podman[229438]: 2025-11-26 12:50:06.560714688 +0000 UTC m=+0.124548325 container start d0e44d8ace9d3a4666fc33d7ca8e444a9c5022b8ef84a44abf7dd2bf5d9aa469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_agnesi, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 12:50:06 compute-0 podman[229438]: 2025-11-26 12:50:06.562068116 +0000 UTC m=+0.125901753 container attach d0e44d8ace9d3a4666fc33d7ca8e444a9c5022b8ef84a44abf7dd2bf5d9aa469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 12:50:06 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:50:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8f06bab68b21e3fd5367ae24e1029ab44e8c537fab9cac34e1390bdb0ebe49/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 26 12:50:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8f06bab68b21e3fd5367ae24e1029ab44e8c537fab9cac34e1390bdb0ebe49/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 26 12:50:06 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636.
Nov 26 12:50:06 compute-0 podman[229462]: 2025-11-26 12:50:06.627565962 +0000 UTC m=+0.092452952 container init fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:50:06 compute-0 multipathd[229479]: + sudo -E kolla_set_configs
Nov 26 12:50:06 compute-0 sudo[229485]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 26 12:50:06 compute-0 sudo[229485]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 26 12:50:06 compute-0 sudo[229485]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 26 12:50:06 compute-0 podman[229462]: 2025-11-26 12:50:06.661791946 +0000 UTC m=+0.126678926 container start fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 12:50:06 compute-0 podman[229462]: multipathd
Nov 26 12:50:06 compute-0 systemd[1]: Started multipathd container.
Nov 26 12:50:06 compute-0 multipathd[229479]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 12:50:06 compute-0 multipathd[229479]: INFO:__main__:Validating config file
Nov 26 12:50:06 compute-0 multipathd[229479]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 12:50:06 compute-0 multipathd[229479]: INFO:__main__:Writing out command to execute
Nov 26 12:50:06 compute-0 sudo[229485]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:06 compute-0 multipathd[229479]: ++ cat /run_command
Nov 26 12:50:06 compute-0 sudo[229344]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:06 compute-0 multipathd[229479]: + CMD='/usr/sbin/multipathd -d'
Nov 26 12:50:06 compute-0 multipathd[229479]: + ARGS=
Nov 26 12:50:06 compute-0 multipathd[229479]: + sudo kolla_copy_cacerts
Nov 26 12:50:06 compute-0 sudo[229503]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 26 12:50:06 compute-0 sudo[229503]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 26 12:50:06 compute-0 sudo[229503]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 26 12:50:06 compute-0 sudo[229503]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:06 compute-0 podman[229486]: 2025-11-26 12:50:06.723507692 +0000 UTC m=+0.063570128 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 12:50:06 compute-0 multipathd[229479]: + [[ ! -n '' ]]
Nov 26 12:50:06 compute-0 multipathd[229479]: + . kolla_extend_start
Nov 26 12:50:06 compute-0 multipathd[229479]: Running command: '/usr/sbin/multipathd -d'
Nov 26 12:50:06 compute-0 multipathd[229479]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 26 12:50:06 compute-0 multipathd[229479]: + umask 0022
Nov 26 12:50:06 compute-0 multipathd[229479]: + exec /usr/sbin/multipathd -d
Nov 26 12:50:06 compute-0 systemd[1]: fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636-15acf925f6a3cf08.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 12:50:06 compute-0 systemd[1]: fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636-15acf925f6a3cf08.service: Failed with result 'exit-code'.
Nov 26 12:50:06 compute-0 multipathd[229479]: 2508.355303 | --------start up--------
Nov 26 12:50:06 compute-0 multipathd[229479]: 2508.355371 | read /etc/multipath.conf
Nov 26 12:50:06 compute-0 multipathd[229479]: 2508.360705 | path checkers start up
Nov 26 12:50:07 compute-0 sudo[229665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhaxhbdwfnousworfbdmkjrdchckgoci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161406.8579526-602-244991253514787/AnsiballZ_file.py'
Nov 26 12:50:07 compute-0 sudo[229665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:07 compute-0 python3.9[229667]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:50:07 compute-0 sudo[229665]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:07 compute-0 great_agnesi[229463]: {
Nov 26 12:50:07 compute-0 great_agnesi[229463]:     "0": [
Nov 26 12:50:07 compute-0 great_agnesi[229463]:         {
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "devices": [
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "/dev/loop3"
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             ],
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "lv_name": "ceph_lv0",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "lv_size": "21470642176",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "name": "ceph_lv0",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "tags": {
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.cluster_name": "ceph",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.crush_device_class": "",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.encrypted": "0",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.osd_id": "0",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.type": "block",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.vdo": "0"
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             },
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "type": "block",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "vg_name": "ceph_vg0"
Nov 26 12:50:07 compute-0 great_agnesi[229463]:         }
Nov 26 12:50:07 compute-0 great_agnesi[229463]:     ],
Nov 26 12:50:07 compute-0 great_agnesi[229463]:     "1": [
Nov 26 12:50:07 compute-0 great_agnesi[229463]:         {
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "devices": [
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "/dev/loop4"
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             ],
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "lv_name": "ceph_lv1",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "lv_size": "21470642176",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "name": "ceph_lv1",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "tags": {
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.cluster_name": "ceph",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.crush_device_class": "",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.encrypted": "0",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.osd_id": "1",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.type": "block",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.vdo": "0"
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             },
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "type": "block",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "vg_name": "ceph_vg1"
Nov 26 12:50:07 compute-0 great_agnesi[229463]:         }
Nov 26 12:50:07 compute-0 great_agnesi[229463]:     ],
Nov 26 12:50:07 compute-0 great_agnesi[229463]:     "2": [
Nov 26 12:50:07 compute-0 great_agnesi[229463]:         {
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "devices": [
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "/dev/loop5"
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             ],
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "lv_name": "ceph_lv2",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "lv_size": "21470642176",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "name": "ceph_lv2",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "tags": {
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.cluster_name": "ceph",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.crush_device_class": "",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.encrypted": "0",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.osd_id": "2",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.type": "block",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:                 "ceph.vdo": "0"
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             },
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "type": "block",
Nov 26 12:50:07 compute-0 great_agnesi[229463]:             "vg_name": "ceph_vg2"
Nov 26 12:50:07 compute-0 great_agnesi[229463]:         }
Nov 26 12:50:07 compute-0 great_agnesi[229463]:     ]
Nov 26 12:50:07 compute-0 great_agnesi[229463]: }
Nov 26 12:50:07 compute-0 ceph-mon[74966]: pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:07 compute-0 systemd[1]: libpod-d0e44d8ace9d3a4666fc33d7ca8e444a9c5022b8ef84a44abf7dd2bf5d9aa469.scope: Deactivated successfully.
Nov 26 12:50:07 compute-0 conmon[229463]: conmon d0e44d8ace9d3a4666fc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d0e44d8ace9d3a4666fc33d7ca8e444a9c5022b8ef84a44abf7dd2bf5d9aa469.scope/container/memory.events
Nov 26 12:50:07 compute-0 podman[229438]: 2025-11-26 12:50:07.263731367 +0000 UTC m=+0.827565004 container died d0e44d8ace9d3a4666fc33d7ca8e444a9c5022b8ef84a44abf7dd2bf5d9aa469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_agnesi, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:50:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-301badca497ba2ff74888eae09fb817777ca250724f1e3b28c9d5b2c038510ac-merged.mount: Deactivated successfully.
Nov 26 12:50:07 compute-0 podman[229438]: 2025-11-26 12:50:07.295713817 +0000 UTC m=+0.859547454 container remove d0e44d8ace9d3a4666fc33d7ca8e444a9c5022b8ef84a44abf7dd2bf5d9aa469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:50:07 compute-0 systemd[1]: libpod-conmon-d0e44d8ace9d3a4666fc33d7ca8e444a9c5022b8ef84a44abf7dd2bf5d9aa469.scope: Deactivated successfully.
Nov 26 12:50:07 compute-0 sudo[229247]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:07 compute-0 sudo[229706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:50:07 compute-0 sudo[229706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:50:07 compute-0 sudo[229706]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:07 compute-0 sudo[229731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:50:07 compute-0 sudo[229731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:50:07 compute-0 sudo[229731]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:07 compute-0 sudo[229756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:50:07 compute-0 sudo[229756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:50:07 compute-0 sudo[229756]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:07 compute-0 sudo[229781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- raw list --format json
Nov 26 12:50:07 compute-0 sudo[229781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:50:07 compute-0 sudo[229953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvpfpbdqacimhlbdkgnrwryykivbaqsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161407.4954724-614-162837765393811/AnsiballZ_file.py'
Nov 26 12:50:07 compute-0 sudo[229953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:07 compute-0 podman[229964]: 2025-11-26 12:50:07.744572759 +0000 UTC m=+0.027524349 container create 52223a633795e845e42d55be13ac2f7080f1d9ef4b9e0ab1032c299b00507a00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:50:07 compute-0 systemd[1]: Started libpod-conmon-52223a633795e845e42d55be13ac2f7080f1d9ef4b9e0ab1032c299b00507a00.scope.
Nov 26 12:50:07 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:50:07 compute-0 podman[229964]: 2025-11-26 12:50:07.801068708 +0000 UTC m=+0.084020298 container init 52223a633795e845e42d55be13ac2f7080f1d9ef4b9e0ab1032c299b00507a00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 12:50:07 compute-0 podman[229964]: 2025-11-26 12:50:07.80584506 +0000 UTC m=+0.088796650 container start 52223a633795e845e42d55be13ac2f7080f1d9ef4b9e0ab1032c299b00507a00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_edison, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:50:07 compute-0 eager_edison[229977]: 167 167
Nov 26 12:50:07 compute-0 systemd[1]: libpod-52223a633795e845e42d55be13ac2f7080f1d9ef4b9e0ab1032c299b00507a00.scope: Deactivated successfully.
Nov 26 12:50:07 compute-0 conmon[229977]: conmon 52223a633795e845e42d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52223a633795e845e42d55be13ac2f7080f1d9ef4b9e0ab1032c299b00507a00.scope/container/memory.events
Nov 26 12:50:07 compute-0 podman[229964]: 2025-11-26 12:50:07.811199941 +0000 UTC m=+0.094151550 container attach 52223a633795e845e42d55be13ac2f7080f1d9ef4b9e0ab1032c299b00507a00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 12:50:07 compute-0 podman[229964]: 2025-11-26 12:50:07.811410858 +0000 UTC m=+0.094362448 container died 52223a633795e845e42d55be13ac2f7080f1d9ef4b9e0ab1032c299b00507a00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_edison, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:50:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad9b1dd525522c83e616199e712b5ecdca17a7fe6a0e5c5f47b10678bf50be8a-merged.mount: Deactivated successfully.
Nov 26 12:50:07 compute-0 podman[229964]: 2025-11-26 12:50:07.73370545 +0000 UTC m=+0.016657059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:50:07 compute-0 podman[229964]: 2025-11-26 12:50:07.834275393 +0000 UTC m=+0.117226983 container remove 52223a633795e845e42d55be13ac2f7080f1d9ef4b9e0ab1032c299b00507a00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:50:07 compute-0 systemd[1]: libpod-conmon-52223a633795e845e42d55be13ac2f7080f1d9ef4b9e0ab1032c299b00507a00.scope: Deactivated successfully.
Nov 26 12:50:07 compute-0 python3.9[229961]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 26 12:50:07 compute-0 sudo[229953]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:07 compute-0 podman[230022]: 2025-11-26 12:50:07.957144517 +0000 UTC m=+0.030150282 container create 46573e9cd4203a22ac3c77ba8709ca3e8a9160ff298d7e1e2a473a4b1d2adf8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_sammet, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:50:07 compute-0 systemd[1]: Started libpod-conmon-46573e9cd4203a22ac3c77ba8709ca3e8a9160ff298d7e1e2a473a4b1d2adf8e.scope.
Nov 26 12:50:07 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be84675484252c526c4406838029db029137523379c6bf2ac5e1509bc40099b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be84675484252c526c4406838029db029137523379c6bf2ac5e1509bc40099b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be84675484252c526c4406838029db029137523379c6bf2ac5e1509bc40099b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be84675484252c526c4406838029db029137523379c6bf2ac5e1509bc40099b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:50:08 compute-0 podman[230022]: 2025-11-26 12:50:08.012487925 +0000 UTC m=+0.085493701 container init 46573e9cd4203a22ac3c77ba8709ca3e8a9160ff298d7e1e2a473a4b1d2adf8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_sammet, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:50:08 compute-0 podman[230022]: 2025-11-26 12:50:08.018177166 +0000 UTC m=+0.091182932 container start 46573e9cd4203a22ac3c77ba8709ca3e8a9160ff298d7e1e2a473a4b1d2adf8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_sammet, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 12:50:08 compute-0 podman[230022]: 2025-11-26 12:50:08.019249716 +0000 UTC m=+0.092255481 container attach 46573e9cd4203a22ac3c77ba8709ca3e8a9160ff298d7e1e2a473a4b1d2adf8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:50:08 compute-0 podman[230022]: 2025-11-26 12:50:07.945037714 +0000 UTC m=+0.018043501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:50:08 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:08 compute-0 sudo[230165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avqadedoqnqdjhhkxcofatpjtnmhqjvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161407.9920142-622-205631784854642/AnsiballZ_modprobe.py'
Nov 26 12:50:08 compute-0 sudo[230165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:08 compute-0 python3.9[230167]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 26 12:50:08 compute-0 kernel: Key type psk registered
Nov 26 12:50:08 compute-0 sudo[230165]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:08 compute-0 sudo[230342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxkqempackcwbzdqtpeqsqthewtflmlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161408.4976869-630-144457494151988/AnsiballZ_stat.py'
Nov 26 12:50:08 compute-0 sudo[230342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:08 compute-0 vibrant_sammet[230055]: {
Nov 26 12:50:08 compute-0 vibrant_sammet[230055]:     "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 12:50:08 compute-0 vibrant_sammet[230055]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:50:08 compute-0 vibrant_sammet[230055]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 12:50:08 compute-0 vibrant_sammet[230055]:         "osd_id": 1,
Nov 26 12:50:08 compute-0 vibrant_sammet[230055]:         "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:50:08 compute-0 vibrant_sammet[230055]:         "type": "bluestore"
Nov 26 12:50:08 compute-0 vibrant_sammet[230055]:     },
Nov 26 12:50:08 compute-0 vibrant_sammet[230055]:     "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 12:50:08 compute-0 vibrant_sammet[230055]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:50:08 compute-0 vibrant_sammet[230055]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 12:50:08 compute-0 vibrant_sammet[230055]:         "osd_id": 2,
Nov 26 12:50:08 compute-0 vibrant_sammet[230055]:         "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:50:08 compute-0 vibrant_sammet[230055]:         "type": "bluestore"
Nov 26 12:50:08 compute-0 vibrant_sammet[230055]:     },
Nov 26 12:50:08 compute-0 vibrant_sammet[230055]:     "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 12:50:08 compute-0 vibrant_sammet[230055]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:50:08 compute-0 vibrant_sammet[230055]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 12:50:08 compute-0 vibrant_sammet[230055]:         "osd_id": 0,
Nov 26 12:50:08 compute-0 vibrant_sammet[230055]:         "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:50:08 compute-0 vibrant_sammet[230055]:         "type": "bluestore"
Nov 26 12:50:08 compute-0 vibrant_sammet[230055]:     }
Nov 26 12:50:08 compute-0 vibrant_sammet[230055]: }
Nov 26 12:50:08 compute-0 systemd[1]: libpod-46573e9cd4203a22ac3c77ba8709ca3e8a9160ff298d7e1e2a473a4b1d2adf8e.scope: Deactivated successfully.
Nov 26 12:50:08 compute-0 podman[230022]: 2025-11-26 12:50:08.798984684 +0000 UTC m=+0.871990449 container died 46573e9cd4203a22ac3c77ba8709ca3e8a9160ff298d7e1e2a473a4b1d2adf8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_sammet, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:50:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-6be84675484252c526c4406838029db029137523379c6bf2ac5e1509bc40099b-merged.mount: Deactivated successfully.
Nov 26 12:50:08 compute-0 podman[230022]: 2025-11-26 12:50:08.832683525 +0000 UTC m=+0.905689291 container remove 46573e9cd4203a22ac3c77ba8709ca3e8a9160ff298d7e1e2a473a4b1d2adf8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:50:08 compute-0 systemd[1]: libpod-conmon-46573e9cd4203a22ac3c77ba8709ca3e8a9160ff298d7e1e2a473a4b1d2adf8e.scope: Deactivated successfully.
Nov 26 12:50:08 compute-0 python3.9[230344]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:50:08 compute-0 sudo[229781]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:08 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:50:08 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:50:08 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:50:08 compute-0 sudo[230342]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:08 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:50:08 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 726340b3-d55a-4ff4-9473-a116fecd744e does not exist
Nov 26 12:50:08 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 09b07daf-b843-4bff-bfdf-6641048db691 does not exist
Nov 26 12:50:08 compute-0 sudo[230368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:50:08 compute-0 sudo[230368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:50:08 compute-0 sudo[230368]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:08 compute-0 sudo[230416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 12:50:08 compute-0 sudo[230416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:50:08 compute-0 sudo[230416]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:09 compute-0 sudo[230538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eftbyvcsghugwgqqngmwllfvitxbsenb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161408.4976869-630-144457494151988/AnsiballZ_copy.py'
Nov 26 12:50:09 compute-0 sudo[230538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:09 compute-0 python3.9[230540]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764161408.4976869-630-144457494151988/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:50:09 compute-0 sudo[230538]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:09 compute-0 ceph-mon[74966]: pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:09 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:50:09 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:50:09 compute-0 sudo[230690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqdktvurnxxuzdxnybmmvyvbuickabys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161409.4116743-646-227591332781139/AnsiballZ_lineinfile.py'
Nov 26 12:50:09 compute-0 sudo[230690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:09 compute-0 python3.9[230692]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:50:09 compute-0 sudo[230690]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:10 compute-0 sudo[230842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbzgpcpgmzwbxeydqihhkjrogljnkznz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161409.8741727-654-201509360940407/AnsiballZ_systemd.py'
Nov 26 12:50:10 compute-0 sudo[230842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:10 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:10 compute-0 python3.9[230844]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 12:50:10 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 26 12:50:10 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 26 12:50:10 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 26 12:50:10 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 26 12:50:10 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 26 12:50:10 compute-0 sudo[230842]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:10 compute-0 sudo[230998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfsvuynzxsbqyxocieyohwjaxgpmqokp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161410.519654-662-223131792213929/AnsiballZ_dnf.py'
Nov 26 12:50:10 compute-0 sudo[230998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:10 compute-0 python3.9[231000]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 12:50:11 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:50:11 compute-0 ceph-mon[74966]: pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:12 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:12 compute-0 systemd[1]: Reloading.
Nov 26 12:50:12 compute-0 systemd-rc-local-generator[231026]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:50:12 compute-0 systemd-sysv-generator[231029]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:50:13 compute-0 systemd[1]: Reloading.
Nov 26 12:50:13 compute-0 systemd-sysv-generator[231065]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:50:13 compute-0 systemd-rc-local-generator[231062]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:50:13 compute-0 ceph-mon[74966]: pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:13 compute-0 podman[231077]: 2025-11-26 12:50:13.488401397 +0000 UTC m=+0.063441636 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 12:50:13 compute-0 systemd-logind[777]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 26 12:50:13 compute-0 lvm[231131]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 26 12:50:13 compute-0 lvm[231131]: VG ceph_vg1 finished
Nov 26 12:50:13 compute-0 lvm[231134]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 26 12:50:13 compute-0 lvm[231134]: VG ceph_vg0 finished
Nov 26 12:50:13 compute-0 lvm[231135]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 26 12:50:13 compute-0 lvm[231135]: VG ceph_vg2 finished
Nov 26 12:50:13 compute-0 systemd-logind[777]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 26 12:50:13 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 12:50:13 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 26 12:50:13 compute-0 systemd[1]: Reloading.
Nov 26 12:50:13 compute-0 systemd-sysv-generator[231203]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:50:13 compute-0 systemd-rc-local-generator[231199]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:50:14 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 12:50:14 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:14 compute-0 sudo[230998]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:14 compute-0 sudo[232493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucwafpemqjswovlybkxmbxaovzxzhohv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161414.4397893-670-94466564081364/AnsiballZ_systemd_service.py'
Nov 26 12:50:14 compute-0 sudo[232493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:14 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 12:50:14 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 26 12:50:14 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.042s CPU time.
Nov 26 12:50:14 compute-0 systemd[1]: run-rccb0fbeeea4c4cfcb490d5fad865620c.service: Deactivated successfully.
Nov 26 12:50:14 compute-0 python3.9[232495]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 12:50:14 compute-0 systemd[1]: Stopping Open-iSCSI...
Nov 26 12:50:14 compute-0 iscsid[219785]: iscsid shutting down.
Nov 26 12:50:14 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Nov 26 12:50:14 compute-0 systemd[1]: Stopped Open-iSCSI.
Nov 26 12:50:14 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 26 12:50:14 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 26 12:50:14 compute-0 systemd[1]: Started Open-iSCSI.
Nov 26 12:50:14 compute-0 sudo[232493]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:15 compute-0 ceph-mon[74966]: pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:15 compute-0 python3.9[232650]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 12:50:16 compute-0 sudo[232804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnwwecswbpavkxopvfiiaxpdxxcqewxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161415.8427434-688-36953841940409/AnsiballZ_file.py'
Nov 26 12:50:16 compute-0 sudo[232804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:16 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:50:16 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:16 compute-0 python3.9[232806]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:50:16 compute-0 sudo[232804]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:16 compute-0 sudo[232956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiyrawrhuuwwotpavqbrkzgzdbwxfwnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161416.4441872-699-134472186870023/AnsiballZ_systemd_service.py'
Nov 26 12:50:16 compute-0 sudo[232956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:16 compute-0 python3.9[232958]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 12:50:16 compute-0 systemd[1]: Reloading.
Nov 26 12:50:16 compute-0 systemd-sysv-generator[232985]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:50:16 compute-0 systemd-rc-local-generator[232982]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:50:17 compute-0 sudo[232956]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:17 compute-0 ceph-mon[74966]: pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:17 compute-0 python3.9[233144]: ansible-ansible.builtin.service_facts Invoked
Nov 26 12:50:17 compute-0 network[233161]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 12:50:17 compute-0 network[233162]: 'network-scripts' will be removed from distribution in near future.
Nov 26 12:50:17 compute-0 network[233163]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 12:50:18 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:19 compute-0 ceph-mon[74966]: pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:19 compute-0 sudo[233436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-metuteojflpxaopdxrpafudnmucfobdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161419.8135266-718-125361022965744/AnsiballZ_systemd_service.py'
Nov 26 12:50:19 compute-0 sudo[233436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:20 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:20 compute-0 python3.9[233438]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:50:20 compute-0 sudo[233436]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:20 compute-0 sudo[233589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kurcsplvfjhepyftctqlsyoyigwruzhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161420.3540218-718-113676742891169/AnsiballZ_systemd_service.py'
Nov 26 12:50:20 compute-0 sudo[233589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:20 compute-0 python3.9[233591]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:50:20 compute-0 sudo[233589]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:21 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:50:21 compute-0 sudo[233742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udsjetfixgkqiqdljsxpwlsqvbdpjcwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161420.8972933-718-44739998213145/AnsiballZ_systemd_service.py'
Nov 26 12:50:21 compute-0 sudo[233742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:21 compute-0 ceph-mon[74966]: pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:21 compute-0 python3.9[233744]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:50:21 compute-0 sudo[233742]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:21 compute-0 sudo[233895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsssqtqxbjxugfhezucqhebitepedmir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161421.4395146-718-44919137396756/AnsiballZ_systemd_service.py'
Nov 26 12:50:21 compute-0 sudo[233895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:21 compute-0 python3.9[233897]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:50:21 compute-0 sudo[233895]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:22 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:22 compute-0 sudo[234048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlikmhnoyflzgqqevjopfjihrmmylkof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161421.980608-718-245526125989769/AnsiballZ_systemd_service.py'
Nov 26 12:50:22 compute-0 sudo[234048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:22 compute-0 python3.9[234050]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:50:22 compute-0 sudo[234048]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:22 compute-0 sudo[234201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxqmoktidrnbrddiobqflitkizvfytaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161422.5401394-718-236497604975830/AnsiballZ_systemd_service.py'
Nov 26 12:50:22 compute-0 sudo[234201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:22 compute-0 python3.9[234203]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:50:23 compute-0 sudo[234201]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:23 compute-0 sudo[234354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veibdaaqyrwfbgikytdjdeejhvyngrsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161423.0878336-718-51854759640073/AnsiballZ_systemd_service.py'
Nov 26 12:50:23 compute-0 sudo[234354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:23 compute-0 ceph-mon[74966]: pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:23 compute-0 python3.9[234356]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:50:23 compute-0 sudo[234354]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:23 compute-0 sudo[234507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilcdszcwrwpvolrzbkhzkqpotlsjrgzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161423.631047-718-173437662559468/AnsiballZ_systemd_service.py'
Nov 26 12:50:23 compute-0 sudo[234507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:24 compute-0 python3.9[234509]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:50:24 compute-0 sudo[234507]: pam_unix(sudo:session): session closed for user root
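The eight systemd_service invocations above (tripleo_nova_compute through tripleo_nova_vnc_proxy) all request enabled=False, state=stopped. A minimal sketch of the same operation outside Ansible, using only the unit names recorded in the log; this is an illustration, not the module's implementation:

    import subprocess

    # Units the log shows being disabled and stopped on compute-0.
    UNITS = [
        "tripleo_nova_compute.service",
        "tripleo_nova_migration_target.service",
        "tripleo_nova_api_cron.service",
        "tripleo_nova_api.service",
        "tripleo_nova_conductor.service",
        "tripleo_nova_metadata.service",
        "tripleo_nova_scheduler.service",
        "tripleo_nova_vnc_proxy.service",
    ]

    for unit in UNITS:
        # Equivalent of systemd_service enabled=False, state=stopped.
        subprocess.run(["systemctl", "disable", "--now", unit], check=False)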
Nov 26 12:50:24 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:24 compute-0 sudo[234660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfceibrmsdssilxuinkqonbkgcucowus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161424.3050804-777-194318089604412/AnsiballZ_file.py'
Nov 26 12:50:24 compute-0 sudo[234660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:24 compute-0 python3.9[234662]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:50:24 compute-0 sudo[234660]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:24 compute-0 sudo[234812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmgpukfrqkqgbgzhgeqnemdfomngoipy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161424.73821-777-8962542120335/AnsiballZ_file.py'
Nov 26 12:50:24 compute-0 sudo[234812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:25 compute-0 python3.9[234814]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:50:25 compute-0 sudo[234812]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:25 compute-0 ceph-mon[74966]: pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:25 compute-0 sudo[234964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csvfyjphbxekmkuyixvbhevcsrcyobse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161425.177206-777-244730787452887/AnsiballZ_file.py'
Nov 26 12:50:25 compute-0 sudo[234964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:25 compute-0 python3.9[234966]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:50:25 compute-0 sudo[234964]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:25 compute-0 sudo[235116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kprfixkidortvxcplaprbegjpekbbmnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161425.665197-777-196911781454467/AnsiballZ_file.py'
Nov 26 12:50:25 compute-0 sudo[235116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:50:26 compute-0 python3.9[235118]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:50:26 compute-0 sudo[235116]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:26 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:26 compute-0 sudo[235268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzajzfbisvsuyoavpsybojkdhchcvlfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161426.148775-777-265923829119862/AnsiballZ_file.py'
Nov 26 12:50:26 compute-0 sudo[235268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:26 compute-0 python3.9[235270]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:50:26 compute-0 sudo[235268]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:26 compute-0 sudo[235420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdddrtjlujgkhpsokiwxbgkocruwyuxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161426.6183896-777-243660328621246/AnsiballZ_file.py'
Nov 26 12:50:26 compute-0 sudo[235420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:26 compute-0 python3.9[235422]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:50:26 compute-0 sudo[235420]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:27 compute-0 ceph-mon[74966]: pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:27 compute-0 sudo[235572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkdnopgxggkpucrldovoekjzvtjxesta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161427.0912747-777-227318129137065/AnsiballZ_file.py'
Nov 26 12:50:27 compute-0 sudo[235572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:27 compute-0 python3.9[235574]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:50:27 compute-0 sudo[235572]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:27 compute-0 sudo[235724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcwhpnsbruuodyjnkxvvmbqvzieffhma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161427.5548556-777-145393008363379/AnsiballZ_file.py'
Nov 26 12:50:27 compute-0 sudo[235724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:27 compute-0 python3.9[235726]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:50:27 compute-0 sudo[235724]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:28 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:28 compute-0 sudo[235876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wenhqwaukvsznwxiauwsznsmolyhybme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161428.035278-834-86690349769378/AnsiballZ_file.py'
Nov 26 12:50:28 compute-0 sudo[235876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:28 compute-0 python3.9[235878]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:50:28 compute-0 sudo[235876]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:28 compute-0 sudo[236028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmvmknaloahwveyhibmwozwisvwpkffh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161428.4749594-834-263228778795490/AnsiballZ_file.py'
Nov 26 12:50:28 compute-0 sudo[236028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:28 compute-0 python3.9[236030]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:50:28 compute-0 sudo[236028]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:29 compute-0 sudo[236180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwkhpqlcuuzxmzwtsfzyyxjqnbmaejio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161428.9085646-834-36666211094102/AnsiballZ_file.py'
Nov 26 12:50:29 compute-0 sudo[236180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:29 compute-0 python3.9[236182]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:50:29 compute-0 sudo[236180]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:29 compute-0 ceph-mon[74966]: pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:29 compute-0 sudo[236332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfdxovjulkpeusfubwdxsufmrfmnfxzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161429.3402092-834-210341494828245/AnsiballZ_file.py'
Nov 26 12:50:29 compute-0 sudo[236332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:29 compute-0 python3.9[236334]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:50:29 compute-0 sudo[236332]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:29 compute-0 sudo[236484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbkcqqxjohkgovghzrksasdpprgowzge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161429.7474568-834-208733817055509/AnsiballZ_file.py'
Nov 26 12:50:29 compute-0 sudo[236484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:30 compute-0 python3.9[236486]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:50:30 compute-0 sudo[236484]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:30 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:30 compute-0 sudo[236636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gifgnorulmpvifmtqkzzyyywqddhsjgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161430.157176-834-275149407951966/AnsiballZ_file.py'
Nov 26 12:50:30 compute-0 sudo[236636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:30 compute-0 python3.9[236638]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:50:30 compute-0 sudo[236636]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:30 compute-0 sudo[236788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsbjwssgiivporjecwgkfyjyqblsszee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161430.568276-834-9532218847917/AnsiballZ_file.py'
Nov 26 12:50:30 compute-0 sudo[236788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:30 compute-0 python3.9[236790]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:50:30 compute-0 sudo[236788]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:31 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:50:31 compute-0 sudo[236940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weaovidijzvblqymyxdxzcakttjyueal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161430.99765-834-198900402272035/AnsiballZ_file.py'
Nov 26 12:50:31 compute-0 sudo[236940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:31 compute-0 ceph-mon[74966]: pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:31 compute-0 python3.9[236942]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:50:31 compute-0 sudo[236940]: pam_unix(sudo:session): session closed for user root
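The file tasks above then delete the leftover unit files for the same eight services, first under /usr/lib/systemd/system and then under /etc/systemd/system, using state=absent. A compact sketch of that cleanup, assuming the paths shown in the log (illustrative only):

    from pathlib import Path

    UNIT_DIRS = ["/usr/lib/systemd/system", "/etc/systemd/system"]
    # Abbreviated; the log removes all eight tripleo_nova_* units from both directories.
    UNITS = ["tripleo_nova_compute.service", "tripleo_nova_vnc_proxy.service"]

    for unit_dir in UNIT_DIRS:
        for unit in UNITS:
            # ansible.builtin.file state=absent: remove the file if it exists.
            Path(unit_dir, unit).unlink(missing_ok=True)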
Nov 26 12:50:31 compute-0 sudo[237092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbbbjeajqpvuybprwordutaltpmtaeai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161431.5228107-892-67811227461615/AnsiballZ_command.py'
Nov 26 12:50:31 compute-0 sudo[237092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:31 compute-0 python3.9[237094]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:50:31 compute-0 sudo[237092]: pam_unix(sudo:session): session closed for user root
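The certmonger step logged above is a small conditional: only if certmonger.service is currently active is it disabled and stopped, and it is masked only when no local unit file exists under /etc/systemd/system. The same logic as the logged /bin/sh snippet, restated as a sketch:

    import os
    import subprocess

    # Act only when certmonger is active right now.
    if subprocess.run(["systemctl", "is-active", "certmonger.service"]).returncode == 0:
        subprocess.run(["systemctl", "disable", "--now", "certmonger.service"], check=True)
        # Mask only if no local override unit file is present.
        if not os.path.isfile("/etc/systemd/system/certmonger.service"):
            subprocess.run(["systemctl", "mask", "certmonger.service"], check=True)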
Nov 26 12:50:32 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:32 compute-0 python3.9[237246]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 12:50:32 compute-0 sudo[237405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jehincfblnwwgyhonermkijftthadjpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161432.5888848-910-201879776868411/AnsiballZ_systemd_service.py'
Nov 26 12:50:32 compute-0 sudo[237405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:32 compute-0 podman[237370]: 2025-11-26 12:50:32.794466083 +0000 UTC m=+0.047891432 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 12:50:33 compute-0 python3.9[237412]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 12:50:33 compute-0 systemd[1]: Reloading.
Nov 26 12:50:33 compute-0 systemd-rc-local-generator[237434]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:50:33 compute-0 systemd-sysv-generator[237437]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:50:33 compute-0 ceph-mon[74966]: pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:33 compute-0 sudo[237405]: pam_unix(sudo:session): session closed for user root
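With the unit files gone, the playbook runs a daemon reload so systemd forgets the deleted units; the "Reloading." line and the rc-local/sysv-generator notices that follow are the normal side effects of that reload (the generator warning refers to the legacy /etc/rc.d/init.d/network script, unrelated to the nova cleanup). The corresponding command, for reference:

    import subprocess

    # ansible.builtin.systemd_service with daemon_reload=True corresponds to this.
    subprocess.run(["systemctl", "daemon-reload"], check=True)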
Nov 26 12:50:33 compute-0 sudo[237599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhqmcskhewkxbptxqrxsqntdsqleoqpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161433.4662836-918-96468665162104/AnsiballZ_command.py'
Nov 26 12:50:33 compute-0 sudo[237599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:33 compute-0 python3.9[237601]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:50:33 compute-0 sudo[237599]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:34 compute-0 sudo[237752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dopxxunrbwbzqgxxeyozvyqemyphinpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161433.9285524-918-279398011095869/AnsiballZ_command.py'
Nov 26 12:50:34 compute-0 sudo[237752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:34 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:34 compute-0 python3.9[237754]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:50:34 compute-0 sudo[237752]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:34 compute-0 sudo[237905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojiqmhmgxlclyksilwspwvjductrxzcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161434.4390247-918-271796286952752/AnsiballZ_command.py'
Nov 26 12:50:34 compute-0 sudo[237905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:34 compute-0 python3.9[237907]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:50:34 compute-0 sudo[237905]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:35 compute-0 sudo[238058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mihurzpdcaegcvbofofuupiygunuesyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161434.859451-918-146402098672628/AnsiballZ_command.py'
Nov 26 12:50:35 compute-0 sudo[238058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:35 compute-0 python3.9[238060]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:50:35 compute-0 sudo[238058]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:35 compute-0 ceph-mon[74966]: pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:35 compute-0 sudo[238211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfyrnbpwnalpbvsicbhtzagfbmiiwrzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161435.285545-918-68467494899570/AnsiballZ_command.py'
Nov 26 12:50:35 compute-0 sudo[238211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:35 compute-0 python3.9[238213]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:50:35 compute-0 sudo[238211]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:35 compute-0 sudo[238364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbvvnqaymwanswrjsknizvyjklbublmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161435.7027998-918-157721127621986/AnsiballZ_command.py'
Nov 26 12:50:35 compute-0 sudo[238364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:50:35
Nov 26 12:50:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 12:50:35 compute-0 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 12:50:35 compute-0 ceph-mgr[75236]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'volumes', '.mgr', '.rgw.root', 'backups', 'default.rgw.control', 'vms', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data']
Nov 26 12:50:35 compute-0 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 12:50:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:50:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:50:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:50:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:50:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:50:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:50:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 12:50:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:50:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:50:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 12:50:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:50:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:50:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:50:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:50:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:50:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:50:36 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:50:36 compute-0 python3.9[238366]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:50:36 compute-0 sudo[238364]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:36 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:36 compute-0 sudo[238517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqlfbxzlyxgfdpbxfobrjezqhgksdynf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161436.1402283-918-271805949391236/AnsiballZ_command.py'
Nov 26 12:50:36 compute-0 sudo[238517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:36 compute-0 python3.9[238519]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:50:36 compute-0 sudo[238517]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:36 compute-0 sudo[238670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kygeoekcgkpfwplnljypiagouybwyegs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161436.5667198-918-208252699721766/AnsiballZ_command.py'
Nov 26 12:50:36 compute-0 sudo[238670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:36 compute-0 podman[238672]: 2025-11-26 12:50:36.800345249 +0000 UTC m=+0.046050576 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251118)
Nov 26 12:50:36 compute-0 python3.9[238673]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 12:50:36 compute-0 sudo[238670]: pam_unix(sudo:session): session closed for user root
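After the reload, each removed unit gets a `systemctl reset-failed` so that any failure state remembered from the earlier stop does not linger in `systemctl --failed`. A sketch over the same unit list as above:

    import subprocess

    # Abbreviated list; the log runs reset-failed for all eight tripleo_nova_* units.
    for unit in ("tripleo_nova_compute.service", "tripleo_nova_vnc_proxy.service"):
        subprocess.run(["systemctl", "reset-failed", unit], check=False)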
Nov 26 12:50:37 compute-0 ceph-mon[74966]: pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:37 compute-0 sudo[238841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjeerahkwbdfqracxztvvnhajvzcoxop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161437.5612817-997-256485624209273/AnsiballZ_file.py'
Nov 26 12:50:37 compute-0 sudo[238841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:37 compute-0 python3.9[238843]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:50:37 compute-0 sudo[238841]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:38 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:38 compute-0 sudo[238993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adktlxzzwjmxhflvbumvlnivwhomkhyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161437.9981024-997-109593272897544/AnsiballZ_file.py'
Nov 26 12:50:38 compute-0 sudo[238993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:38 compute-0 python3.9[238995]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:50:38 compute-0 sudo[238993]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:38 compute-0 sudo[239145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceiijhtyksxonubnzvwllnoktrkvlreo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161438.4423604-997-32642251476643/AnsiballZ_file.py'
Nov 26 12:50:38 compute-0 sudo[239145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:38 compute-0 python3.9[239147]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:50:38 compute-0 sudo[239145]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:39 compute-0 sudo[239297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlmwilqltgeijuussbwrkpsrldsyzgye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161438.8982377-1019-118748321786678/AnsiballZ_file.py'
Nov 26 12:50:39 compute-0 sudo[239297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:39 compute-0 python3.9[239299]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:50:39 compute-0 sudo[239297]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:39 compute-0 ceph-mon[74966]: pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:39 compute-0 sudo[239449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhnzkyubeedfohlsuffreduaevkgxmwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161439.3417485-1019-15373420389274/AnsiballZ_file.py'
Nov 26 12:50:39 compute-0 sudo[239449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:39 compute-0 python3.9[239451]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:50:39 compute-0 sudo[239449]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:39 compute-0 sudo[239601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izqtdgqfvapamjhbrcoocgnnmieeozcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161439.7874398-1019-102079038734018/AnsiballZ_file.py'
Nov 26 12:50:39 compute-0 sudo[239601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:40 compute-0 python3.9[239603]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:50:40 compute-0 sudo[239601]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:40 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:40 compute-0 sudo[239753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jokyfwnvwijvuqrqubqcgpamdfgcdeek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161440.2177894-1019-87696833655250/AnsiballZ_file.py'
Nov 26 12:50:40 compute-0 sudo[239753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:40 compute-0 python3.9[239755]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:50:40 compute-0 sudo[239753]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:40 compute-0 sudo[239905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meejqwwlqtqarrfknejioxyyauljshgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161440.6485765-1019-160040538929976/AnsiballZ_file.py'
Nov 26 12:50:40 compute-0 sudo[239905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:40 compute-0 python3.9[239907]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:50:40 compute-0 sudo[239905]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:50:41 compute-0 sudo[240057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckbdffxtkvyacotiikrsmmykerizhwkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161441.079093-1019-8752051721184/AnsiballZ_file.py'
Nov 26 12:50:41 compute-0 sudo[240057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:41 compute-0 ceph-mon[74966]: pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:41 compute-0 python3.9[240059]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:50:41 compute-0 sudo[240057]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:41 compute-0 sudo[240209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbujcprnaotxhnerwgjwtijcgqgakvoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161441.5368261-1019-67601132172821/AnsiballZ_file.py'
Nov 26 12:50:41 compute-0 sudo[240209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:41 compute-0 python3.9[240211]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:50:41 compute-0 sudo[240209]: pam_unix(sudo:session): session closed for user root
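The next group of file tasks prepares the EDPM layout: configuration directories under /var/lib/openstack/config, /var/lib/nova and friends, owned by zuul with mode 0755 (root and 0750 for /etc/ceph) and SELinux type container_file_t. A sketch of the same effect over a subset of the logged paths, using chcon for the context change as a simplification (Ansible applies the setype itself):

    import os
    import shutil
    import subprocess

    # Subset of the paths, ownership and modes recorded in the log.
    DIRS = {
        "/var/lib/openstack/config/nova": ("zuul", "zuul", 0o755),
        "/var/lib/openstack/config/containers": ("zuul", "zuul", 0o755),
        "/var/lib/nova/instances": ("zuul", "zuul", 0o755),
        "/etc/ceph": ("root", "root", 0o750),
    }

    for path, (owner, group, mode) in DIRS.items():
        os.makedirs(path, exist_ok=True)
        os.chmod(path, mode)
        shutil.chown(path, owner, group)
        # setype=container_file_t so containers may access the directory.
        subprocess.run(["chcon", "-t", "container_file_t", path], check=True)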
Nov 26 12:50:42 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:43 compute-0 ceph-mon[74966]: pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:43 compute-0 podman[240236]: 2025-11-26 12:50:43.882278722 +0000 UTC m=+0.051528378 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
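The podman health_status records for ovn_metadata_agent, multipathd and ovn_controller come from podman's periodic container healthchecks running the configured test ('/openstack/healthcheck' mounted into each container); health_failing_streak=0 means the check has not been failing. The same check can be triggered by hand (container name taken from the log; purely illustrative):

    import subprocess

    # Exit code 0 means the healthcheck command inside the container succeeded.
    result = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if result.returncode == 0 else "unhealthy")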
Nov 26 12:50:44 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:44 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 26 12:50:45 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 26 12:50:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 12:50:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:50:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 12:50:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:50:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:50:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:50:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:50:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:50:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:50:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:50:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:50:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:50:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 12:50:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:50:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:50:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:50:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 12:50:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:50:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 12:50:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:50:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:50:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:50:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
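The pg_autoscaler lines follow a simple relation: the logged "pg target" is the pool's share of raw capacity times its bias times a cluster-wide PG budget. With the numbers above, 7.185749983720779e-06 * 1.0 * 300 = 0.0021557... for '.mgr' and 5.087256625643029e-07 * 4.0 * 300 = 0.00061047... for 'cephfs.cephfs.meta', matching the logged values; the factor of 300 is consistent with the default mon_target_pg_per_osd of 100 across the three OSDs backing this 60 GiB cluster (an inference from the log, not stated in it). A two-line check:

    # Reproduce the logged pg targets from the logged usage ratios and biases.
    for ratio, bias in ((7.185749983720779e-06, 1.0), (5.087256625643029e-07, 4.0)):
        print(ratio * bias * 300)  # 300 = assumed PG budget (100 per OSD x 3 OSDs)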
Nov 26 12:50:45 compute-0 ceph-mon[74966]: pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:45 compute-0 sudo[240386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egzizztlkfbwcugddumgcuqwrifjdhwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161445.429714-1208-132186887883381/AnsiballZ_getent.py'
Nov 26 12:50:45 compute-0 sudo[240386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:45 compute-0 python3.9[240388]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 26 12:50:45 compute-0 sudo[240386]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:50:46 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:46 compute-0 sudo[240539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tntgjcvkvuvdsucsgrfsuzqewgggyphv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161445.9812665-1216-119480843987895/AnsiballZ_group.py'
Nov 26 12:50:46 compute-0 sudo[240539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:46 compute-0 python3.9[240541]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 26 12:50:46 compute-0 groupadd[240542]: group added to /etc/group: name=nova, GID=42436
Nov 26 12:50:46 compute-0 groupadd[240542]: group added to /etc/gshadow: name=nova
Nov 26 12:50:46 compute-0 groupadd[240542]: new group: name=nova, GID=42436
Nov 26 12:50:46 compute-0 sudo[240539]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:46 compute-0 sudo[240697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgooxymyoneapattqnqnquoqtegvehtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161446.5730333-1224-276615835815925/AnsiballZ_user.py'
Nov 26 12:50:46 compute-0 sudo[240697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:47 compute-0 python3.9[240699]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 26 12:50:47 compute-0 useradd[240701]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Nov 26 12:50:47 compute-0 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 12:50:47 compute-0 useradd[240701]: add 'nova' to group 'libvirt'
Nov 26 12:50:47 compute-0 useradd[240701]: add 'nova' to shadow group 'libvirt'
Nov 26 12:50:47 compute-0 sudo[240697]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:47 compute-0 ceph-mon[74966]: pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:47 compute-0 sshd-session[240733]: Accepted publickey for zuul from 192.168.122.30 port 35194 ssh2: ECDSA SHA256:oYqKaXpw3UXfGsjV9kVmxjhHDxrL0kntNo/c2mjveus
Nov 26 12:50:47 compute-0 systemd-logind[777]: New session 50 of user zuul.
Nov 26 12:50:47 compute-0 systemd[1]: Started Session 50 of User zuul.
Nov 26 12:50:47 compute-0 sshd-session[240733]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:50:47 compute-0 sshd-session[240736]: Received disconnect from 192.168.122.30 port 35194:11: disconnected by user
Nov 26 12:50:47 compute-0 sshd-session[240736]: Disconnected from user zuul 192.168.122.30 port 35194
Nov 26 12:50:47 compute-0 sshd-session[240733]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:50:47 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Nov 26 12:50:47 compute-0 systemd-logind[777]: Session 50 logged out. Waiting for processes to exit.
Nov 26 12:50:47 compute-0 systemd-logind[777]: Removed session 50.
Nov 26 12:50:48 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:48 compute-0 python3.9[240886]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:50:48 compute-0 python3.9[241007]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161448.1077352-1249-252507792139306/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:50:49 compute-0 python3.9[241157]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:50:49 compute-0 ceph-mon[74966]: pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:49 compute-0 python3.9[241233]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:50:49 compute-0 python3.9[241383]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:50:50 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:50 compute-0 python3.9[241504]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161449.6667883-1249-214256897861032/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:50:50 compute-0 python3.9[241654]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:50:51 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:50:51 compute-0 python3.9[241775]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161450.447445-1249-42044207191133/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:50:51 compute-0 ceph-mon[74966]: pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:51 compute-0 python3.9[241925]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:50:51 compute-0 python3.9[242046]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161451.230874-1249-34548819520463/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:50:52 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 26 12:50:52 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 26 12:50:52 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:52 compute-0 python3.9[242198]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:50:52 compute-0 python3.9[242319]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161452.0037158-1249-174235084244132/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:50:52 compute-0 sudo[242469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjezkobzcmzpphouebfitqgpdadvedkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161452.8257833-1332-151063754682417/AnsiballZ_file.py'
Nov 26 12:50:52 compute-0 sudo[242469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:53 compute-0 python3.9[242471]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:50:53 compute-0 sudo[242469]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:53 compute-0 ceph-mon[74966]: pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:53 compute-0 sudo[242621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghuvgnvcpjmoxxyembxpjdjjjwgkebjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161453.2642832-1340-58194810802653/AnsiballZ_copy.py'
Nov 26 12:50:53 compute-0 sudo[242621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:53 compute-0 python3.9[242623]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:50:53 compute-0 sudo[242621]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:53 compute-0 sudo[242773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxvdukcnusbzlmjrxmsgnttyzfwbnksj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161453.7111213-1348-111435283684890/AnsiballZ_stat.py'
Nov 26 12:50:53 compute-0 sudo[242773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:54 compute-0 python3.9[242775]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:50:54 compute-0 sudo[242773]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:54 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:54 compute-0 sudo[242925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhkjqydalnfklmztgfueilovvqikjzqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161454.162896-1356-74291724971731/AnsiballZ_stat.py'
Nov 26 12:50:54 compute-0 sudo[242925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:54 compute-0 python3.9[242927]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:50:54 compute-0 sudo[242925]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:54 compute-0 sudo[243048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhrjrjicpyizvxruazienvuwtpwsetpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161454.162896-1356-74291724971731/AnsiballZ_copy.py'
Nov 26 12:50:54 compute-0 sudo[243048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:54 compute-0 python3.9[243050]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764161454.162896-1356-74291724971731/.source _original_basename=.pf2iqhf3 follow=False checksum=590b3b4ab7698d0274add06521820e666b67a1e5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 26 12:50:54 compute-0 sudo[243048]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:55 compute-0 python3.9[243202]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:50:55 compute-0 ceph-mon[74966]: pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:55 compute-0 python3.9[243354]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:50:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:50:56 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:56 compute-0 python3.9[243475]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161455.501877-1382-31479373457118/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=4c77b2c041a7564aa2c84115117dc8517e9bb9ef backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:50:56 compute-0 python3.9[243625]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 12:50:57 compute-0 python3.9[243746]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764161456.3783662-1397-101906590068266/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=941d5739094d046b86479403aeaaf0441b82ba11 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 12:50:57 compute-0 ceph-mon[74966]: pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:57 compute-0 sudo[243896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjjrqkdoibgxbaqngsukoucasacycsdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161457.338359-1414-88882674565288/AnsiballZ_container_config_data.py'
Nov 26 12:50:57 compute-0 sudo[243896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:57 compute-0 python3.9[243898]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 26 12:50:57 compute-0 sudo[243896]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:58 compute-0 sudo[244048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yodjminfznfghahdeqcoxacbjpikkjeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161457.865588-1423-16185581179741/AnsiballZ_container_config_hash.py'
Nov 26 12:50:58 compute-0 sudo[244048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:58 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:50:58 compute-0 python3.9[244050]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 12:50:58 compute-0 sudo[244048]: pam_unix(sudo:session): session closed for user root
Nov 26 12:50:58 compute-0 sudo[244200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkhxvlztrucbfvsdqpcjtxqnrkgqhlhp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764161458.46906-1433-135811731036961/AnsiballZ_edpm_container_manage.py'
Nov 26 12:50:58 compute-0 sudo[244200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:50:58 compute-0 python3[244202]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 12:50:59 compute-0 ceph-mon[74966]: pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:00 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:01 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:51:01 compute-0 ceph-mon[74966]: pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:51:01.726 159053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 12:51:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:51:01.727 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 12:51:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:51:01.727 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 12:51:02 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:03 compute-0 ceph-mon[74966]: pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:04 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:05 compute-0 ceph-mon[74966]: pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:05 compute-0 podman[244248]: 2025-11-26 12:51:05.623803316 +0000 UTC m=+2.785487840 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 26 12:51:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:51:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:51:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:51:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:51:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:51:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:51:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:51:06 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:07 compute-0 ceph-mon[74966]: pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:08 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:09 compute-0 sudo[244284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:51:09 compute-0 sudo[244284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:51:09 compute-0 sudo[244284]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:09 compute-0 sudo[244309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:51:09 compute-0 sudo[244309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:51:09 compute-0 sudo[244309]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:09 compute-0 sudo[244334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:51:09 compute-0 sudo[244334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:51:09 compute-0 sudo[244334]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:09 compute-0 sudo[244359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 12:51:09 compute-0 sudo[244359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:51:09 compute-0 podman[244278]: 2025-11-26 12:51:09.322946305 +0000 UTC m=+2.491938734 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 12:51:09 compute-0 podman[244213]: 2025-11-26 12:51:09.345434736 +0000 UTC m=+10.401701178 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 26 12:51:09 compute-0 ceph-mon[74966]: pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:09 compute-0 podman[244427]: 2025-11-26 12:51:09.447006878 +0000 UTC m=+0.031014943 container create 919277d59aea2048c4b8b971af9c276cffe3574f965720e5798921af9b487d73 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible)
Nov 26 12:51:09 compute-0 podman[244427]: 2025-11-26 12:51:09.43312491 +0000 UTC m=+0.017132985 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 26 12:51:09 compute-0 python3[244202]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 26 12:51:09 compute-0 sudo[244359]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:09 compute-0 sudo[244200]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:09 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:51:09 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:51:09 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 12:51:09 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:51:09 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 12:51:09 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:51:09 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 4a1f5183-bb3f-4197-8241-e40ebd38a161 does not exist
Nov 26 12:51:09 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 4ed8f87b-28ab-49ca-a546-1d9a56b4ecbc does not exist
Nov 26 12:51:09 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 67eca5af-87bf-4653-b122-7f56ebd5e606 does not exist
Nov 26 12:51:09 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 12:51:09 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:51:09 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 12:51:09 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:51:09 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:51:09 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:51:09 compute-0 sudo[244474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:51:09 compute-0 sudo[244474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:51:09 compute-0 sudo[244474]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:09 compute-0 sudo[244518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:51:09 compute-0 sudo[244518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:51:09 compute-0 sudo[244518]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:09 compute-0 sudo[244556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:51:09 compute-0 sudo[244556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:51:09 compute-0 sudo[244556]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:09 compute-0 sudo[244611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 12:51:09 compute-0 sudo[244611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:51:09 compute-0 sudo[244720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqnvessebjnttlrmcaccszakjxfakeiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161469.6790214-1441-151498154222177/AnsiballZ_stat.py'
Nov 26 12:51:09 compute-0 sudo[244720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:51:09 compute-0 podman[244753]: 2025-11-26 12:51:09.990545481 +0000 UTC m=+0.030272944 container create f4f61f18ba00b028bfa668dfa950ff3eda7e9e251c809df8af4f3bc26946c724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chaum, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:51:10 compute-0 python3.9[244728]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:51:10 compute-0 systemd[1]: Started libpod-conmon-f4f61f18ba00b028bfa668dfa950ff3eda7e9e251c809df8af4f3bc26946c724.scope.
Nov 26 12:51:10 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:51:10 compute-0 sudo[244720]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:10 compute-0 podman[244753]: 2025-11-26 12:51:10.053895676 +0000 UTC m=+0.093623150 container init f4f61f18ba00b028bfa668dfa950ff3eda7e9e251c809df8af4f3bc26946c724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 12:51:10 compute-0 podman[244753]: 2025-11-26 12:51:10.059857965 +0000 UTC m=+0.099585428 container start f4f61f18ba00b028bfa668dfa950ff3eda7e9e251c809df8af4f3bc26946c724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:51:10 compute-0 podman[244753]: 2025-11-26 12:51:10.062281724 +0000 UTC m=+0.102009186 container attach f4f61f18ba00b028bfa668dfa950ff3eda7e9e251c809df8af4f3bc26946c724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chaum, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 12:51:10 compute-0 gifted_chaum[244768]: 167 167
Nov 26 12:51:10 compute-0 systemd[1]: libpod-f4f61f18ba00b028bfa668dfa950ff3eda7e9e251c809df8af4f3bc26946c724.scope: Deactivated successfully.
Nov 26 12:51:10 compute-0 podman[244753]: 2025-11-26 12:51:10.065750663 +0000 UTC m=+0.105478125 container died f4f61f18ba00b028bfa668dfa950ff3eda7e9e251c809df8af4f3bc26946c724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 12:51:10 compute-0 podman[244753]: 2025-11-26 12:51:09.977827849 +0000 UTC m=+0.017555332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:51:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-53cd74ac09b2639127fe1b4f3a7c747288ccab34b8f7a32441b176545d8ef8a2-merged.mount: Deactivated successfully.
Nov 26 12:51:10 compute-0 podman[244753]: 2025-11-26 12:51:10.085126195 +0000 UTC m=+0.124853658 container remove f4f61f18ba00b028bfa668dfa950ff3eda7e9e251c809df8af4f3bc26946c724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chaum, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:51:10 compute-0 systemd[1]: libpod-conmon-f4f61f18ba00b028bfa668dfa950ff3eda7e9e251c809df8af4f3bc26946c724.scope: Deactivated successfully.
Nov 26 12:51:10 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:10 compute-0 podman[244814]: 2025-11-26 12:51:10.215275635 +0000 UTC m=+0.034137310 container create 992ffd94b85e3a7fedc43561dcbe5602d4e260e9764b3873cc92fd761f60fdd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 12:51:10 compute-0 systemd[1]: Started libpod-conmon-992ffd94b85e3a7fedc43561dcbe5602d4e260e9764b3873cc92fd761f60fdd8.scope.
Nov 26 12:51:10 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:51:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22100547e6649899de4ce968394c18c2c204aa7fe53218db8fca4fcb0079206b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:51:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22100547e6649899de4ce968394c18c2c204aa7fe53218db8fca4fcb0079206b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:51:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22100547e6649899de4ce968394c18c2c204aa7fe53218db8fca4fcb0079206b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:51:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22100547e6649899de4ce968394c18c2c204aa7fe53218db8fca4fcb0079206b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:51:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22100547e6649899de4ce968394c18c2c204aa7fe53218db8fca4fcb0079206b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:51:10 compute-0 podman[244814]: 2025-11-26 12:51:10.280438366 +0000 UTC m=+0.099300052 container init 992ffd94b85e3a7fedc43561dcbe5602d4e260e9764b3873cc92fd761f60fdd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 12:51:10 compute-0 podman[244814]: 2025-11-26 12:51:10.286297962 +0000 UTC m=+0.105159637 container start 992ffd94b85e3a7fedc43561dcbe5602d4e260e9764b3873cc92fd761f60fdd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bardeen, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 12:51:10 compute-0 podman[244814]: 2025-11-26 12:51:10.287370963 +0000 UTC m=+0.106232639 container attach 992ffd94b85e3a7fedc43561dcbe5602d4e260e9764b3873cc92fd761f60fdd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 26 12:51:10 compute-0 podman[244814]: 2025-11-26 12:51:10.19840843 +0000 UTC m=+0.017270126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:51:10 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:51:10 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:51:10 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:51:10 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:51:10 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:51:10 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:51:10 compute-0 sudo[244958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkolulywlqbmshrtwqxhytmaebbbhdjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161470.3242536-1453-126365495357858/AnsiballZ_container_config_data.py'
Nov 26 12:51:10 compute-0 sudo[244958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:51:10 compute-0 python3.9[244960]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 26 12:51:10 compute-0 sudo[244958]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:11 compute-0 sudo[245126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucovjjvraifwnwevumpxetqshqhiabel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161470.8388429-1462-192679183980015/AnsiballZ_container_config_hash.py'
Nov 26 12:51:11 compute-0 sudo[245126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:51:11 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:51:11 compute-0 keen_bardeen[244828]: --> passed data devices: 0 physical, 3 LVM
Nov 26 12:51:11 compute-0 keen_bardeen[244828]: --> relative data size: 1.0
Nov 26 12:51:11 compute-0 keen_bardeen[244828]: --> All data devices are unavailable
Nov 26 12:51:11 compute-0 systemd[1]: libpod-992ffd94b85e3a7fedc43561dcbe5602d4e260e9764b3873cc92fd761f60fdd8.scope: Deactivated successfully.
Nov 26 12:51:11 compute-0 podman[244814]: 2025-11-26 12:51:11.128500432 +0000 UTC m=+0.947362108 container died 992ffd94b85e3a7fedc43561dcbe5602d4e260e9764b3873cc92fd761f60fdd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:51:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-22100547e6649899de4ce968394c18c2c204aa7fe53218db8fca4fcb0079206b-merged.mount: Deactivated successfully.
Nov 26 12:51:11 compute-0 podman[244814]: 2025-11-26 12:51:11.166277993 +0000 UTC m=+0.985139668 container remove 992ffd94b85e3a7fedc43561dcbe5602d4e260e9764b3873cc92fd761f60fdd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:51:11 compute-0 systemd[1]: libpod-conmon-992ffd94b85e3a7fedc43561dcbe5602d4e260e9764b3873cc92fd761f60fdd8.scope: Deactivated successfully.
Nov 26 12:51:11 compute-0 sudo[244611]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:11 compute-0 python3.9[245130]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 12:51:11 compute-0 sudo[245147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:51:11 compute-0 sudo[245126]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:11 compute-0 sudo[245147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:51:11 compute-0 sudo[245147]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:11 compute-0 sudo[245172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:51:11 compute-0 sudo[245172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:51:11 compute-0 sudo[245172]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:11 compute-0 sudo[245197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:51:11 compute-0 sudo[245197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:51:11 compute-0 sudo[245197]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:11 compute-0 sudo[245222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- lvm list --format json
Nov 26 12:51:11 compute-0 sudo[245222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:51:11 compute-0 ceph-mon[74966]: pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:11 compute-0 podman[245356]: 2025-11-26 12:51:11.611036677 +0000 UTC m=+0.030966862 container create d21573156a0fff5cb77d6a214fbd1d64dfbf747de9e404252ea2c4771bd8baad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:51:11 compute-0 systemd[1]: Started libpod-conmon-d21573156a0fff5cb77d6a214fbd1d64dfbf747de9e404252ea2c4771bd8baad.scope.
Nov 26 12:51:11 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:51:11 compute-0 podman[245356]: 2025-11-26 12:51:11.665827297 +0000 UTC m=+0.085757482 container init d21573156a0fff5cb77d6a214fbd1d64dfbf747de9e404252ea2c4771bd8baad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hypatia, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 12:51:11 compute-0 podman[245356]: 2025-11-26 12:51:11.670844425 +0000 UTC m=+0.090774620 container start d21573156a0fff5cb77d6a214fbd1d64dfbf747de9e404252ea2c4771bd8baad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hypatia, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 12:51:11 compute-0 podman[245356]: 2025-11-26 12:51:11.672133003 +0000 UTC m=+0.092063198 container attach d21573156a0fff5cb77d6a214fbd1d64dfbf747de9e404252ea2c4771bd8baad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hypatia, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 12:51:11 compute-0 tender_hypatia[245407]: 167 167
Nov 26 12:51:11 compute-0 systemd[1]: libpod-d21573156a0fff5cb77d6a214fbd1d64dfbf747de9e404252ea2c4771bd8baad.scope: Deactivated successfully.
Nov 26 12:51:11 compute-0 podman[245356]: 2025-11-26 12:51:11.675040974 +0000 UTC m=+0.094971170 container died d21573156a0fff5cb77d6a214fbd1d64dfbf747de9e404252ea2c4771bd8baad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:51:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-46332e9446e31a676ec2c94a1f227370c019288698e193ecb754a668ced3ddbb-merged.mount: Deactivated successfully.
Nov 26 12:51:11 compute-0 podman[245356]: 2025-11-26 12:51:11.695063647 +0000 UTC m=+0.114993843 container remove d21573156a0fff5cb77d6a214fbd1d64dfbf747de9e404252ea2c4771bd8baad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hypatia, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:51:11 compute-0 podman[245356]: 2025-11-26 12:51:11.599024765 +0000 UTC m=+0.018954980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:51:11 compute-0 systemd[1]: libpod-conmon-d21573156a0fff5cb77d6a214fbd1d64dfbf747de9e404252ea2c4771bd8baad.scope: Deactivated successfully.
Nov 26 12:51:11 compute-0 sudo[245456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xreuukfdfehnweaqiwkxljlpvnftqxhn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764161471.509077-1472-206552283182633/AnsiballZ_edpm_container_manage.py'
Nov 26 12:51:11 compute-0 sudo[245456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:51:11 compute-0 podman[245464]: 2025-11-26 12:51:11.821265647 +0000 UTC m=+0.030830957 container create 3847d7ceb8a2e3b7d3df941bfba0ee42a39d92b2f3752d247a75e287d43c1059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wozniak, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 12:51:11 compute-0 systemd[1]: Started libpod-conmon-3847d7ceb8a2e3b7d3df941bfba0ee42a39d92b2f3752d247a75e287d43c1059.scope.
Nov 26 12:51:11 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:51:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b208452bca47f00bd1d545580886eb180e086c2a9f5fb3e4e3380c7e015d2dcb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:51:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b208452bca47f00bd1d545580886eb180e086c2a9f5fb3e4e3380c7e015d2dcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:51:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b208452bca47f00bd1d545580886eb180e086c2a9f5fb3e4e3380c7e015d2dcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:51:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b208452bca47f00bd1d545580886eb180e086c2a9f5fb3e4e3380c7e015d2dcb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:51:11 compute-0 podman[245464]: 2025-11-26 12:51:11.876306047 +0000 UTC m=+0.085871367 container init 3847d7ceb8a2e3b7d3df941bfba0ee42a39d92b2f3752d247a75e287d43c1059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wozniak, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:51:11 compute-0 podman[245464]: 2025-11-26 12:51:11.88196258 +0000 UTC m=+0.091527890 container start 3847d7ceb8a2e3b7d3df941bfba0ee42a39d92b2f3752d247a75e287d43c1059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wozniak, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:51:11 compute-0 podman[245464]: 2025-11-26 12:51:11.883204881 +0000 UTC m=+0.092770191 container attach 3847d7ceb8a2e3b7d3df941bfba0ee42a39d92b2f3752d247a75e287d43c1059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:51:11 compute-0 podman[245464]: 2025-11-26 12:51:11.809391364 +0000 UTC m=+0.018956694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:51:11 compute-0 python3[245458]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 12:51:12 compute-0 podman[245511]: 2025-11-26 12:51:12.078359635 +0000 UTC m=+0.029711317 container create fbc1fe4d8414fa081c74996b3909d7d09438f73005ef4925d36f111d14b00f86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.build-date=20251118, tcib_managed=true, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 12:51:12 compute-0 podman[245511]: 2025-11-26 12:51:12.064153467 +0000 UTC m=+0.015505168 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 26 12:51:12 compute-0 python3[245458]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 kolla_start
Nov 26 12:51:12 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:12 compute-0 sudo[245456]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:12 compute-0 sudo[245692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-robrhtstzffwuwgibbuwxzsqhpjgsuzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161472.2975621-1480-159764183051269/AnsiballZ_stat.py'
Nov 26 12:51:12 compute-0 sudo[245692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]: {
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:     "0": [
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:         {
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "devices": [
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "/dev/loop3"
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             ],
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "lv_name": "ceph_lv0",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "lv_size": "21470642176",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "name": "ceph_lv0",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "tags": {
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.cluster_name": "ceph",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.crush_device_class": "",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.encrypted": "0",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.osd_id": "0",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.type": "block",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.vdo": "0"
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             },
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "type": "block",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "vg_name": "ceph_vg0"
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:         }
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:     ],
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:     "1": [
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:         {
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "devices": [
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "/dev/loop4"
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             ],
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "lv_name": "ceph_lv1",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "lv_size": "21470642176",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "name": "ceph_lv1",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "tags": {
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.cluster_name": "ceph",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.crush_device_class": "",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.encrypted": "0",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.osd_id": "1",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.type": "block",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.vdo": "0"
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             },
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "type": "block",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "vg_name": "ceph_vg1"
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:         }
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:     ],
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:     "2": [
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:         {
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "devices": [
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "/dev/loop5"
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             ],
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "lv_name": "ceph_lv2",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "lv_size": "21470642176",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "name": "ceph_lv2",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "tags": {
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.cluster_name": "ceph",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.crush_device_class": "",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.encrypted": "0",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.osd_id": "2",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.type": "block",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:                 "ceph.vdo": "0"
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             },
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "type": "block",
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:             "vg_name": "ceph_vg2"
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:         }
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]:     ]
Nov 26 12:51:12 compute-0 peaceful_wozniak[245478]: }
Nov 26 12:51:12 compute-0 systemd[1]: libpod-3847d7ceb8a2e3b7d3df941bfba0ee42a39d92b2f3752d247a75e287d43c1059.scope: Deactivated successfully.
Nov 26 12:51:12 compute-0 podman[245464]: 2025-11-26 12:51:12.526291812 +0000 UTC m=+0.735857122 container died 3847d7ceb8a2e3b7d3df941bfba0ee42a39d92b2f3752d247a75e287d43c1059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:51:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-b208452bca47f00bd1d545580886eb180e086c2a9f5fb3e4e3380c7e015d2dcb-merged.mount: Deactivated successfully.
Nov 26 12:51:12 compute-0 podman[245464]: 2025-11-26 12:51:12.560279448 +0000 UTC m=+0.769844758 container remove 3847d7ceb8a2e3b7d3df941bfba0ee42a39d92b2f3752d247a75e287d43c1059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wozniak, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:51:12 compute-0 systemd[1]: libpod-conmon-3847d7ceb8a2e3b7d3df941bfba0ee42a39d92b2f3752d247a75e287d43c1059.scope: Deactivated successfully.
Nov 26 12:51:12 compute-0 sudo[245222]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:12 compute-0 sudo[245707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:51:12 compute-0 sudo[245707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:51:12 compute-0 python3.9[245696]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:51:12 compute-0 sudo[245707]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:12 compute-0 sudo[245692]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:12 compute-0 sudo[245733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:51:12 compute-0 sudo[245733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:51:12 compute-0 sudo[245733]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:12 compute-0 sudo[245765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:51:12 compute-0 sudo[245765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:51:12 compute-0 sudo[245765]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:12 compute-0 sudo[245808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- raw list --format json
Nov 26 12:51:12 compute-0 sudo[245808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:51:13 compute-0 sudo[245996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wurjqpevdomapdnbsktpxlpjuirmffcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161472.8280158-1489-59294714040367/AnsiballZ_file.py'
Nov 26 12:51:13 compute-0 sudo[245996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:51:13 compute-0 podman[245970]: 2025-11-26 12:51:13.026490101 +0000 UTC m=+0.033682983 container create ff763317ba150e891d53dcc9e4a70fbeebef07c2c00ca50edff60823c2153d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 26 12:51:13 compute-0 systemd[1]: Started libpod-conmon-ff763317ba150e891d53dcc9e4a70fbeebef07c2c00ca50edff60823c2153d7e.scope.
Nov 26 12:51:13 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:51:13 compute-0 podman[245970]: 2025-11-26 12:51:13.09417209 +0000 UTC m=+0.101364993 container init ff763317ba150e891d53dcc9e4a70fbeebef07c2c00ca50edff60823c2153d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lovelace, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 12:51:13 compute-0 podman[245970]: 2025-11-26 12:51:13.099531041 +0000 UTC m=+0.106723924 container start ff763317ba150e891d53dcc9e4a70fbeebef07c2c00ca50edff60823c2153d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lovelace, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:51:13 compute-0 podman[245970]: 2025-11-26 12:51:13.101008546 +0000 UTC m=+0.108201430 container attach ff763317ba150e891d53dcc9e4a70fbeebef07c2c00ca50edff60823c2153d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 12:51:13 compute-0 wizardly_lovelace[246004]: 167 167
Nov 26 12:51:13 compute-0 systemd[1]: libpod-ff763317ba150e891d53dcc9e4a70fbeebef07c2c00ca50edff60823c2153d7e.scope: Deactivated successfully.
Nov 26 12:51:13 compute-0 podman[245970]: 2025-11-26 12:51:13.104071449 +0000 UTC m=+0.111264342 container died ff763317ba150e891d53dcc9e4a70fbeebef07c2c00ca50edff60823c2153d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lovelace, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 12:51:13 compute-0 podman[245970]: 2025-11-26 12:51:13.010833707 +0000 UTC m=+0.018026610 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:51:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b058cb83ef9528cbabcd806c5e023b9a1f175c5ce7f8272ce274b24ae9ea8f8-merged.mount: Deactivated successfully.
Nov 26 12:51:13 compute-0 podman[245970]: 2025-11-26 12:51:13.123566377 +0000 UTC m=+0.130759261 container remove ff763317ba150e891d53dcc9e4a70fbeebef07c2c00ca50edff60823c2153d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lovelace, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 12:51:13 compute-0 systemd[1]: libpod-conmon-ff763317ba150e891d53dcc9e4a70fbeebef07c2c00ca50edff60823c2153d7e.scope: Deactivated successfully.
Nov 26 12:51:13 compute-0 python3.9[246001]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:51:13 compute-0 sudo[245996]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:13 compute-0 podman[246026]: 2025-11-26 12:51:13.252550151 +0000 UTC m=+0.032379016 container create 7f535d22077d7dbf988c112c0deece7bfd1b209fbf04efb42c1a5a24140593f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lumiere, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 26 12:51:13 compute-0 systemd[1]: Started libpod-conmon-7f535d22077d7dbf988c112c0deece7bfd1b209fbf04efb42c1a5a24140593f1.scope.
Nov 26 12:51:13 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f189f69d15ff8c11a48d538bb12a2ac0853e33c25af77b303cd8803fd4189d30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f189f69d15ff8c11a48d538bb12a2ac0853e33c25af77b303cd8803fd4189d30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f189f69d15ff8c11a48d538bb12a2ac0853e33c25af77b303cd8803fd4189d30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f189f69d15ff8c11a48d538bb12a2ac0853e33c25af77b303cd8803fd4189d30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:51:13 compute-0 podman[246026]: 2025-11-26 12:51:13.308434782 +0000 UTC m=+0.088263657 container init 7f535d22077d7dbf988c112c0deece7bfd1b209fbf04efb42c1a5a24140593f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lumiere, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:51:13 compute-0 podman[246026]: 2025-11-26 12:51:13.316240086 +0000 UTC m=+0.096068951 container start 7f535d22077d7dbf988c112c0deece7bfd1b209fbf04efb42c1a5a24140593f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:51:13 compute-0 podman[246026]: 2025-11-26 12:51:13.317468741 +0000 UTC m=+0.097297597 container attach 7f535d22077d7dbf988c112c0deece7bfd1b209fbf04efb42c1a5a24140593f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:51:13 compute-0 podman[246026]: 2025-11-26 12:51:13.239376268 +0000 UTC m=+0.019205153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:51:13 compute-0 ceph-mon[74966]: pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:13 compute-0 sudo[246192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdrhkzhtxeuzjcgaqnbqfiglavmstoqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161473.2649786-1489-214859560833179/AnsiballZ_copy.py'
Nov 26 12:51:13 compute-0 sudo[246192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:51:13 compute-0 python3.9[246194]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764161473.2649786-1489-214859560833179/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 12:51:13 compute-0 sudo[246192]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:13 compute-0 sudo[246269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzmuxygnxuzyjafwugqowcxfnzjdqpuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161473.2649786-1489-214859560833179/AnsiballZ_systemd.py'
Nov 26 12:51:13 compute-0 sudo[246269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:51:14 compute-0 elastic_lumiere[246068]: {
Nov 26 12:51:14 compute-0 elastic_lumiere[246068]:     "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 12:51:14 compute-0 elastic_lumiere[246068]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:51:14 compute-0 elastic_lumiere[246068]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 12:51:14 compute-0 elastic_lumiere[246068]:         "osd_id": 1,
Nov 26 12:51:14 compute-0 elastic_lumiere[246068]:         "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:51:14 compute-0 elastic_lumiere[246068]:         "type": "bluestore"
Nov 26 12:51:14 compute-0 elastic_lumiere[246068]:     },
Nov 26 12:51:14 compute-0 elastic_lumiere[246068]:     "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 12:51:14 compute-0 elastic_lumiere[246068]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:51:14 compute-0 elastic_lumiere[246068]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 12:51:14 compute-0 elastic_lumiere[246068]:         "osd_id": 2,
Nov 26 12:51:14 compute-0 elastic_lumiere[246068]:         "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:51:14 compute-0 elastic_lumiere[246068]:         "type": "bluestore"
Nov 26 12:51:14 compute-0 elastic_lumiere[246068]:     },
Nov 26 12:51:14 compute-0 elastic_lumiere[246068]:     "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 12:51:14 compute-0 elastic_lumiere[246068]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:51:14 compute-0 elastic_lumiere[246068]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 12:51:14 compute-0 elastic_lumiere[246068]:         "osd_id": 0,
Nov 26 12:51:14 compute-0 elastic_lumiere[246068]:         "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:51:14 compute-0 elastic_lumiere[246068]:         "type": "bluestore"
Nov 26 12:51:14 compute-0 elastic_lumiere[246068]:     }
Nov 26 12:51:14 compute-0 elastic_lumiere[246068]: }
Nov 26 12:51:14 compute-0 systemd[1]: libpod-7f535d22077d7dbf988c112c0deece7bfd1b209fbf04efb42c1a5a24140593f1.scope: Deactivated successfully.
Nov 26 12:51:14 compute-0 conmon[246068]: conmon 7f535d22077d7dbf988c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7f535d22077d7dbf988c112c0deece7bfd1b209fbf04efb42c1a5a24140593f1.scope/container/memory.events
Nov 26 12:51:14 compute-0 podman[246026]: 2025-11-26 12:51:14.08637544 +0000 UTC m=+0.866204315 container died 7f535d22077d7dbf988c112c0deece7bfd1b209fbf04efb42c1a5a24140593f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 12:51:14 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-f189f69d15ff8c11a48d538bb12a2ac0853e33c25af77b303cd8803fd4189d30-merged.mount: Deactivated successfully.
Nov 26 12:51:14 compute-0 podman[246026]: 2025-11-26 12:51:14.897891789 +0000 UTC m=+1.677720654 container remove 7f535d22077d7dbf988c112c0deece7bfd1b209fbf04efb42c1a5a24140593f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lumiere, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 12:51:14 compute-0 sudo[245808]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:14 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:51:14 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:51:14 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:51:14 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:51:14 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev ac5fd1ce-6b5a-4792-aca9-5e65d51cc242 does not exist
Nov 26 12:51:14 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev c5cc539d-1948-48c5-913b-bf7ca63388f6 does not exist
Nov 26 12:51:14 compute-0 systemd[1]: libpod-conmon-7f535d22077d7dbf988c112c0deece7bfd1b209fbf04efb42c1a5a24140593f1.scope: Deactivated successfully.
Nov 26 12:51:14 compute-0 sudo[246319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:51:14 compute-0 sudo[246319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:51:14 compute-0 sudo[246319]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:15 compute-0 podman[246299]: 2025-11-26 12:51:15.004749134 +0000 UTC m=+0.899908878 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 26 12:51:15 compute-0 sudo[246355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 12:51:15 compute-0 sudo[246355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:51:15 compute-0 sudo[246355]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:15 compute-0 python3.9[246272]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 12:51:15 compute-0 systemd[1]: Reloading.
Nov 26 12:51:15 compute-0 systemd-rc-local-generator[246403]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:51:15 compute-0 systemd-sysv-generator[246407]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:51:15 compute-0 sudo[246269]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:15 compute-0 ceph-mon[74966]: pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:15 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:51:15 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:51:15 compute-0 sudo[246492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sucuwzloklwvwangxsvjohzhxdgdvgmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161473.2649786-1489-214859560833179/AnsiballZ_systemd.py'
Nov 26 12:51:15 compute-0 sudo[246492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:51:15 compute-0 python3.9[246494]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 12:51:15 compute-0 systemd[1]: Reloading.
Nov 26 12:51:15 compute-0 systemd-rc-local-generator[246517]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 12:51:15 compute-0 systemd-sysv-generator[246520]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 12:51:16 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:51:16 compute-0 systemd[1]: Starting nova_compute container...
Nov 26 12:51:16 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:16 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:51:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c61b91e1f3b9c9bed7a72fd2584302f6cad55b85e13bf6c628170c09b8e2ca/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 26 12:51:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c61b91e1f3b9c9bed7a72fd2584302f6cad55b85e13bf6c628170c09b8e2ca/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 26 12:51:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c61b91e1f3b9c9bed7a72fd2584302f6cad55b85e13bf6c628170c09b8e2ca/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 26 12:51:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c61b91e1f3b9c9bed7a72fd2584302f6cad55b85e13bf6c628170c09b8e2ca/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 26 12:51:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c61b91e1f3b9c9bed7a72fd2584302f6cad55b85e13bf6c628170c09b8e2ca/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 26 12:51:16 compute-0 podman[246534]: 2025-11-26 12:51:16.203240134 +0000 UTC m=+0.071069415 container init fbc1fe4d8414fa081c74996b3909d7d09438f73005ef4925d36f111d14b00f86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Nov 26 12:51:16 compute-0 podman[246534]: 2025-11-26 12:51:16.208051413 +0000 UTC m=+0.075880675 container start fbc1fe4d8414fa081c74996b3909d7d09438f73005ef4925d36f111d14b00f86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 26 12:51:16 compute-0 podman[246534]: nova_compute
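The config_data that podman records for the container above maps fairly directly onto a plain CLI invocation. A minimal sketch of the rough equivalent, reconstructed only from the fields logged here (edpm_ansible actually drives this through systemd and its own role, so the exact flag spelling is an assumption, and most of the volume list is omitted for brevity):

    # Approximate CLI equivalent of the logged nova_compute config_data (sketch only).
    podman run --detach --name nova_compute \
        --privileged --user nova --restart always \
        --net host --pid host \
        --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS \
        --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro \
        --volume /var/lib/nova:/var/lib/nova:shared \
        --volume /run/libvirt:/run/libvirt:shared \
        quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 \
        kolla_start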
Nov 26 12:51:16 compute-0 nova_compute[246546]: + sudo -E kolla_set_configs
Nov 26 12:51:16 compute-0 systemd[1]: Started nova_compute container.
Nov 26 12:51:16 compute-0 sudo[246492]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Validating config file
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Copying service configuration files
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Deleting /etc/ceph
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Creating directory /etc/ceph
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Setting permission for /etc/ceph
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Writing out command to execute
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 26 12:51:16 compute-0 nova_compute[246546]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 26 12:51:16 compute-0 nova_compute[246546]: ++ cat /run_command
Nov 26 12:51:16 compute-0 nova_compute[246546]: + CMD=nova-compute
Nov 26 12:51:16 compute-0 nova_compute[246546]: + ARGS=
Nov 26 12:51:16 compute-0 nova_compute[246546]: + sudo kolla_copy_cacerts
Nov 26 12:51:16 compute-0 nova_compute[246546]: + [[ ! -n '' ]]
Nov 26 12:51:16 compute-0 nova_compute[246546]: + . kolla_extend_start
Nov 26 12:51:16 compute-0 nova_compute[246546]: Running command: 'nova-compute'
Nov 26 12:51:16 compute-0 nova_compute[246546]: + echo 'Running command: '\''nova-compute'\'''
Nov 26 12:51:16 compute-0 nova_compute[246546]: + umask 0022
Nov 26 12:51:16 compute-0 nova_compute[246546]: + exec nova-compute
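The kolla_set_configs sequence above is driven by the config.json loaded from /var/lib/kolla/config_files/config.json: its "command" is written out to /run_command (picked up above via "++ cat /run_command" and "CMD=nova-compute"), and each config_files entry produces one Copying / Setting permission pair in the log. A hypothetical fragment showing that shape; the owner and perm values are assumptions, not read from this system:

    # Illustration only: the shape of a kolla config.json entry that would
    # produce the 01-nova.conf copy logged above.
    cat > /tmp/config.json.example <<'EOF'
    {
        "command": "nova-compute",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/01-nova.conf",
                "dest": "/etc/nova/nova.conf.d/01-nova.conf",
                "owner": "nova",
                "perm": "0600"
            }
        ]
    }
    EOF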
Nov 26 12:51:16 compute-0 python3.9[246707]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:51:17 compute-0 python3.9[246858]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:51:17 compute-0 ceph-mon[74966]: pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:17 compute-0 python3.9[247008]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 12:51:18 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:18 compute-0 nova_compute[246546]: 2025-11-26 12:51:18.249 246550 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 26 12:51:18 compute-0 nova_compute[246546]: 2025-11-26 12:51:18.249 246550 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 26 12:51:18 compute-0 nova_compute[246546]: 2025-11-26 12:51:18.249 246550 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 26 12:51:18 compute-0 nova_compute[246546]: 2025-11-26 12:51:18.249 246550 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 26 12:51:18 compute-0 nova_compute[246546]: 2025-11-26 12:51:18.370 246550 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 12:51:18 compute-0 nova_compute[246546]: 2025-11-26 12:51:18.387 246550 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 12:51:18 compute-0 nova_compute[246546]: 2025-11-26 12:51:18.387 246550 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
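The failed grep above is the compute service probing whether the iscsiadm it can see supports the node.session.scan feature (manual scans); on this host /usr/sbin/iscsiadm inside the container was replaced with the run-on-host wrapper during kolla_set_configs, so the string is not found and the probe returns 1. The same probe can be repeated by hand; running it through podman exec is an assumption about where to check, and on EL9 /sbin is a symlink to /usr/sbin:

    # Exits 0 if this iscsiadm mentions node.session.scan, 1 otherwise
    # (matching the "returned: 1" line in the log).
    sudo podman exec nova_compute grep -F node.session.scan /sbin/iscsiadm
    echo "exit status: $?"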
Nov 26 12:51:18 compute-0 ceph-mon[74966]: pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:18 compute-0 sudo[247162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvixziiirxqwfkceyifjrglklgiicvtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161478.0688865-1549-16231134839836/AnsiballZ_podman_container.py'
Nov 26 12:51:18 compute-0 sudo[247162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:51:18 compute-0 python3.9[247164]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 26 12:51:18 compute-0 sudo[247162]: pam_unix(sudo:session): session closed for user root
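The containers.podman.podman_container task above (name=nova_nvme_cleaner, state=absent, force_delete=True) simply ensures that container no longer exists. Its effect is roughly the following CLI call, shown as a sketch rather than what the module literally executes:

    # Stop and remove the container if it exists; --ignore keeps the command
    # successful when there is nothing to remove.
    sudo podman rm --force --ignore nova_nvme_cleaner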
Nov 26 12:51:18 compute-0 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 12:51:18 compute-0 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 12:51:18 compute-0 nova_compute[246546]: 2025-11-26 12:51:18.902 246550 INFO nova.virt.driver [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.014 246550 INFO nova.compute.provider_config [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.028 246550 DEBUG oslo_concurrency.lockutils [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.028 246550 DEBUG oslo_concurrency.lockutils [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.028 246550 DEBUG oslo_concurrency.lockutils [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.029 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.029 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.029 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.029 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.029 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.029 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.030 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.030 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.030 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.030 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.030 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.030 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.030 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.031 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.031 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.031 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.031 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.031 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.031 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.032 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.032 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.032 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.032 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.032 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.032 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.032 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.033 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.033 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.033 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.033 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.033 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.033 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.033 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.034 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.034 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.034 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.034 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.034 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.034 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.034 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.035 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.035 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.035 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.035 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.035 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.035 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.036 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.036 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.036 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.036 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.036 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.036 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.036 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.037 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.037 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.037 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.037 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.037 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.037 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.037 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.038 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.038 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.038 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.038 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.038 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.038 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.038 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.038 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.039 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.039 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.039 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.039 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.039 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.039 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.039 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.040 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.040 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.040 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.040 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.040 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.040 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.040 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.041 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.041 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.041 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.041 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.041 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.041 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.041 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.041 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.042 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.042 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.042 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.042 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.042 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.042 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.042 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.043 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.043 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.043 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.043 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.043 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.043 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.043 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.044 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.044 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.044 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.044 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.044 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.044 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.044 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.044 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.045 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.045 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.045 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.045 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.045 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.045 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.045 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.046 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.046 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.046 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.046 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.046 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.046 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.046 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.046 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.047 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.047 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.047 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.047 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.047 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.047 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.047 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.048 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.048 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.048 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.048 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.048 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.048 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.048 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.049 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.049 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.049 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.049 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.049 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.049 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.049 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.049 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.050 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.050 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.050 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.050 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.050 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.050 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.051 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.051 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.051 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.051 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.051 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.051 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.051 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.052 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.052 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.052 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.052 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.052 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.052 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.053 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.053 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.053 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.053 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.053 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.053 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.053 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.054 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.054 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.054 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.054 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.054 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.054 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.054 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.055 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.055 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.055 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.055 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.055 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.055 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.055 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.056 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.056 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.056 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.056 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.056 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.056 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.056 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.057 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.057 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.057 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.057 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.057 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.057 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.057 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.058 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.058 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.058 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.058 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.058 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.058 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.058 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.059 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.059 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.059 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.059 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.059 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.059 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.059 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.059 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.060 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.060 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.060 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.060 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.060 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.060 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.060 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.061 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.061 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.061 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.061 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.061 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.061 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.062 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.062 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.062 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.062 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.062 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.062 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.063 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.063 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.063 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.063 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.063 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.063 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.063 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.064 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.064 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.064 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.064 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.064 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.064 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.064 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.065 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.065 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.065 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.065 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.065 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.065 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.065 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.066 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.066 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.066 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.066 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.066 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.066 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.066 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.067 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.067 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.067 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.067 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.067 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.067 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.067 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.068 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.068 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.068 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.068 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.068 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.068 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.068 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.069 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.069 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.069 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.069 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.069 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.069 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.069 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.070 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.070 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.070 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.070 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.070 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.070 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.070 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.071 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.071 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.071 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.071 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.071 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.071 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.071 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.071 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.072 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.072 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.072 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.072 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.072 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.072 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.072 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.073 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.073 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.073 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.073 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.073 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.073 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.073 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.074 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.074 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.074 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.074 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.074 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.074 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.074 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.075 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.075 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.075 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.075 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.075 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.075 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.075 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.076 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.076 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.076 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.076 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.076 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.076 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.076 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.076 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.077 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.077 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.077 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.077 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.077 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.077 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.077 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.078 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.078 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.078 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.078 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.078 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.078 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.078 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.079 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.079 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.079 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.079 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.079 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.079 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.080 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.080 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.080 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.080 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.080 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.080 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.080 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.081 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.081 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.081 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.081 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.081 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.081 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.081 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.082 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.082 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.082 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.082 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.082 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.082 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.082 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.083 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.083 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.083 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.083 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.083 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.083 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.083 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.083 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.084 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.084 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.084 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.084 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.084 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.084 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.084 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.085 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.085 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.085 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.085 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.085 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.085 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.085 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.086 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.086 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.086 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.086 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.086 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.086 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.086 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.087 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.087 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.087 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.087 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.087 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.087 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.087 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.088 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.088 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.088 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.088 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.088 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.088 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.088 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.089 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.089 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.089 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.089 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.089 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.089 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.089 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.090 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.090 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.090 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.090 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.090 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.090 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.090 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.091 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.091 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.091 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.091 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.091 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.091 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.091 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.091 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.092 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.092 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.092 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.092 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.092 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.092 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.092 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.093 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.093 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.093 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.093 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.093 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.093 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.093 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.094 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.094 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.094 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.094 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.094 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.094 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.094 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.095 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.095 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.095 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.095 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.095 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.095 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.095 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.096 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.096 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.096 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.096 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.096 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.096 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.096 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.097 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.097 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.097 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.097 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.097 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.097 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.097 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.097 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.098 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.098 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.098 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.098 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.098 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.098 246550 WARNING oslo_config.cfg [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 26 12:51:19 compute-0 nova_compute[246546]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 26 12:51:19 compute-0 nova_compute[246546]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 26 12:51:19 compute-0 nova_compute[246546]: and ``live_migration_inbound_addr`` respectively.
Nov 26 12:51:19 compute-0 nova_compute[246546]: ).  Its value may be silently ignored in the future.
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.099 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
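The warning above names the two options that replace live_migration_uri. A minimal nova.conf sketch of that replacement, keeping the TLS transport already implied by the logged qemu+tls://%s/system value; the inbound address below is a placeholder and does not come from this log:

    [libvirt]
    # replaces: live_migration_uri = qemu+tls://%s/system
    live_migration_scheme = tls
    # placeholder example: this host's address on the migration network
    live_migration_inbound_addr = 192.0.2.10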
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.099 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.099 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.099 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.099 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.099 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.100 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.100 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.100 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.100 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.100 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.100 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.100 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.101 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.101 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.101 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.101 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.101 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.101 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.rbd_secret_uuid        = f7d7fe93-41e5-51c4-b72d-63b38686102e log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.101 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
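Taken together, the libvirt.images_* and libvirt.rbd_* values logged above describe Ceph RBD-backed ephemeral storage. A nova.conf sketch that only collects the values already shown in this dump, with no new settings introduced:

    [libvirt]
    # ephemeral disks on Ceph RBD, as reported by this option dump
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    images_rbd_glance_store_name = default_backend
    rbd_user = openstack
    rbd_secret_uuid = f7d7fe93-41e5-51c4-b72d-63b38686102e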
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.102 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.102 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.102 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.102 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.102 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.102 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.102 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.103 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.103 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.103 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.103 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.103 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.103 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.104 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.104 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.104 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.104 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.104 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.104 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.104 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.105 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.105 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.105 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.105 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.105 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.105 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.105 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.106 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.106 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.106 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.106 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.106 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.106 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.106 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.107 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.107 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.107 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.107 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.107 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.107 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.107 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.107 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.108 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.108 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.108 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.108 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.108 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.108 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.108 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.109 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.109 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.109 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.109 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.109 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.109 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.109 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.110 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.110 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.110 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.110 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.110 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.110 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.110 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.111 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.111 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.111 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.111 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.111 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.111 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.111 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.112 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.112 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.112 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.112 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.112 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.112 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.112 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.112 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.113 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.113 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.113 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.113 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.113 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.113 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.113 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.114 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.114 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.114 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.114 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.114 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.114 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.114 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.115 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.115 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.115 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.115 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.115 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.115 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.115 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.115 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.116 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.116 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.116 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.116 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.116 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.116 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.116 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.117 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.117 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.117 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.117 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.117 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.117 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.118 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.118 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.118 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.118 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.118 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.118 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.118 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.119 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.119 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.119 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.119 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.119 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.119 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.119 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.120 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.120 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.120 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.120 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.120 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.120 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.120 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.121 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.121 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.121 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.121 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.121 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.121 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.121 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.122 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.122 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.122 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.122 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.122 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.122 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.122 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.123 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.123 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.123 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.123 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.123 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.123 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.123 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.124 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.124 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.124 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.124 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.124 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.124 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.125 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.125 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.125 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.125 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.125 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.125 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.125 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.126 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.126 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.126 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.126 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.126 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.126 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.126 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.126 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.127 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.127 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.127 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.127 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.127 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.127 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.128 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.128 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.128 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.128 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.128 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.128 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.128 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.128 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.129 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.129 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.129 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.129 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.129 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.129 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.129 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.130 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.130 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.130 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.130 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.130 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.130 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.130 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.131 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.131 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.131 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.131 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.131 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.131 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.131 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.132 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.132 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.132 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.132 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.132 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.132 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.132 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.132 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.133 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.133 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.133 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.133 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.133 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.133 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.133 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.134 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.134 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.134 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.134 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.134 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.134 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.135 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.135 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.135 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.135 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.135 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.135 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.135 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.136 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.136 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.136 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.136 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.136 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.136 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.136 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.137 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.137 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.137 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.137 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.137 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.137 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.137 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.137 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.138 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.138 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.138 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.138 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.138 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.138 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.138 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.139 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.139 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.139 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.139 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.139 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.139 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.139 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.140 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.140 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.140 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.140 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.140 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.140 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.140 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.141 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.141 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.141 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.141 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.141 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.141 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.141 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.142 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.142 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.142 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.142 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.142 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.142 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.142 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.143 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.143 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.143 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.143 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.143 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.143 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.143 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.144 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.144 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.144 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.144 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.144 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.144 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.144 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.145 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.145 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.145 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.145 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.145 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.145 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.145 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.145 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.146 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.146 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.146 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.146 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.146 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.146 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.146 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.147 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.147 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.147 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.147 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.147 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.147 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.147 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.148 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.148 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.148 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.148 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.148 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.148 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.148 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.148 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.149 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.149 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.149 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.149 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.149 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.149 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.149 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.150 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.150 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.150 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.150 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.150 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.150 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.150 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.150 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.151 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.151 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.151 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.151 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.151 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.151 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.151 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.152 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.152 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.152 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.152 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.152 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.152 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.152 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.153 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.153 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.153 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.153 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.153 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.153 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.153 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.154 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.154 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.154 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.154 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.154 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.154 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.154 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.154 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.155 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.155 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.155 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.155 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.155 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.155 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.155 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.156 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.156 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.156 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.156 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.156 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.156 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.156 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.157 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.157 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.157 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.157 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.157 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] privsep_osbrick.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.157 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.157 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.157 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.158 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.158 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.158 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] nova_sys_admin.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.158 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.158 246550 DEBUG oslo_service.service [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.159 246550 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.172 246550 DEBUG nova.virt.libvirt.host [None req-feb15756-7b00-4337-8c4d-7a0667580904 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.172 246550 DEBUG nova.virt.libvirt.host [None req-feb15756-7b00-4337-8c4d-7a0667580904 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.172 246550 DEBUG nova.virt.libvirt.host [None req-feb15756-7b00-4337-8c4d-7a0667580904 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.173 246550 DEBUG nova.virt.libvirt.host [None req-feb15756-7b00-4337-8c4d-7a0667580904 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 26 12:51:19 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 26 12:51:19 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 26 12:51:19 compute-0 sudo[247370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tflpghkgwmbpskvmpntadxwfctkoegad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161478.9914331-1557-211310460691989/AnsiballZ_systemd.py'
Nov 26 12:51:19 compute-0 sudo[247370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.240 246550 DEBUG nova.virt.libvirt.host [None req-feb15756-7b00-4337-8c4d-7a0667580904 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f5d79d7f9d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.242 246550 DEBUG nova.virt.libvirt.host [None req-feb15756-7b00-4337-8c4d-7a0667580904 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f5d79d7f9d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.243 246550 INFO nova.virt.libvirt.driver [None req-feb15756-7b00-4337-8c4d-7a0667580904 - - - - - -] Connection event '1' reason 'None'
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.255 246550 WARNING nova.virt.libvirt.driver [None req-feb15756-7b00-4337-8c4d-7a0667580904 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.255 246550 DEBUG nova.virt.libvirt.volume.mount [None req-feb15756-7b00-4337-8c4d-7a0667580904 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 26 12:51:19 compute-0 python3.9[247375]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 12:51:19 compute-0 systemd[1]: Stopping nova_compute container...
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.583 246550 DEBUG oslo_concurrency.lockutils [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.583 246550 DEBUG oslo_concurrency.lockutils [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 12:51:19 compute-0 nova_compute[246546]: 2025-11-26 12:51:19.584 246550 DEBUG oslo_concurrency.lockutils [None req-55633665-894d-4fdf-8659-46a974b44057 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 12:51:20 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:20 compute-0 virtqemud[247331]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 26 12:51:20 compute-0 systemd[1]: libpod-fbc1fe4d8414fa081c74996b3909d7d09438f73005ef4925d36f111d14b00f86.scope: Deactivated successfully.
Nov 26 12:51:20 compute-0 virtqemud[247331]: hostname: compute-0
Nov 26 12:51:20 compute-0 systemd[1]: libpod-fbc1fe4d8414fa081c74996b3909d7d09438f73005ef4925d36f111d14b00f86.scope: Consumed 2.586s CPU time.
Nov 26 12:51:20 compute-0 virtqemud[247331]: End of file while reading data: Input/output error
Nov 26 12:51:20 compute-0 conmon[246546]: conmon fbc1fe4d8414fa081c74 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fbc1fe4d8414fa081c74996b3909d7d09438f73005ef4925d36f111d14b00f86.scope/container/memory.events
Nov 26 12:51:20 compute-0 podman[247390]: 2025-11-26 12:51:20.310241912 +0000 UTC m=+0.762962706 container died fbc1fe4d8414fa081c74996b3909d7d09438f73005ef4925d36f111d14b00f86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.vendor=CentOS, container_name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 26 12:51:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fbc1fe4d8414fa081c74996b3909d7d09438f73005ef4925d36f111d14b00f86-userdata-shm.mount: Deactivated successfully.
Nov 26 12:51:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9c61b91e1f3b9c9bed7a72fd2584302f6cad55b85e13bf6c628170c09b8e2ca-merged.mount: Deactivated successfully.
Nov 26 12:51:20 compute-0 podman[247390]: 2025-11-26 12:51:20.734677687 +0000 UTC m=+1.187398482 container cleanup fbc1fe4d8414fa081c74996b3909d7d09438f73005ef4925d36f111d14b00f86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:51:20 compute-0 podman[247390]: nova_compute
Nov 26 12:51:20 compute-0 podman[247422]: nova_compute
Nov 26 12:51:20 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 26 12:51:20 compute-0 systemd[1]: Stopped nova_compute container.
Nov 26 12:51:20 compute-0 systemd[1]: Starting nova_compute container...
Nov 26 12:51:20 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:51:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c61b91e1f3b9c9bed7a72fd2584302f6cad55b85e13bf6c628170c09b8e2ca/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 26 12:51:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c61b91e1f3b9c9bed7a72fd2584302f6cad55b85e13bf6c628170c09b8e2ca/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 26 12:51:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c61b91e1f3b9c9bed7a72fd2584302f6cad55b85e13bf6c628170c09b8e2ca/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 26 12:51:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c61b91e1f3b9c9bed7a72fd2584302f6cad55b85e13bf6c628170c09b8e2ca/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 26 12:51:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c61b91e1f3b9c9bed7a72fd2584302f6cad55b85e13bf6c628170c09b8e2ca/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 26 12:51:20 compute-0 podman[247431]: 2025-11-26 12:51:20.885889821 +0000 UTC m=+0.077176697 container init fbc1fe4d8414fa081c74996b3909d7d09438f73005ef4925d36f111d14b00f86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, config_id=edpm, org.label-schema.license=GPLv2, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 26 12:51:20 compute-0 podman[247431]: 2025-11-26 12:51:20.893959312 +0000 UTC m=+0.085246168 container start fbc1fe4d8414fa081c74996b3909d7d09438f73005ef4925d36f111d14b00f86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 26 12:51:20 compute-0 podman[247431]: nova_compute
Nov 26 12:51:20 compute-0 nova_compute[247443]: + sudo -E kolla_set_configs
Nov 26 12:51:20 compute-0 systemd[1]: Started nova_compute container.
Nov 26 12:51:20 compute-0 sudo[247370]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Validating config file
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Copying service configuration files
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Deleting /etc/ceph
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Creating directory /etc/ceph
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Setting permission for /etc/ceph
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Writing out command to execute
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 26 12:51:20 compute-0 nova_compute[247443]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 26 12:51:20 compute-0 nova_compute[247443]: ++ cat /run_command
Nov 26 12:51:20 compute-0 nova_compute[247443]: + CMD=nova-compute
Nov 26 12:51:20 compute-0 nova_compute[247443]: + ARGS=
Nov 26 12:51:20 compute-0 nova_compute[247443]: + sudo kolla_copy_cacerts
Nov 26 12:51:21 compute-0 nova_compute[247443]: + [[ ! -n '' ]]
Nov 26 12:51:21 compute-0 nova_compute[247443]: + . kolla_extend_start
Nov 26 12:51:21 compute-0 nova_compute[247443]: Running command: 'nova-compute'
Nov 26 12:51:21 compute-0 nova_compute[247443]: + echo 'Running command: '\''nova-compute'\'''
Nov 26 12:51:21 compute-0 nova_compute[247443]: + umask 0022
Nov 26 12:51:21 compute-0 nova_compute[247443]: + exec nova-compute
Nov 26 12:51:21 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:51:21 compute-0 ceph-mon[74966]: pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:21 compute-0 sudo[247604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbjhwyuamlhjxvriskkoonuvphbunzyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764161481.0809412-1566-41605771217540/AnsiballZ_podman_container.py'
Nov 26 12:51:21 compute-0 sudo[247604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:51:21 compute-0 python3.9[247606]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 26 12:51:21 compute-0 systemd[1]: Started libpod-conmon-919277d59aea2048c4b8b971af9c276cffe3574f965720e5798921af9b487d73.scope.
Nov 26 12:51:21 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:51:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63061608008ba0167fea61eeeb49b1e981373e0a439ce6ecff75ac95fb7d89e2/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 26 12:51:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63061608008ba0167fea61eeeb49b1e981373e0a439ce6ecff75ac95fb7d89e2/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 26 12:51:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63061608008ba0167fea61eeeb49b1e981373e0a439ce6ecff75ac95fb7d89e2/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 26 12:51:21 compute-0 podman[247625]: 2025-11-26 12:51:21.686638673 +0000 UTC m=+0.094570254 container init 919277d59aea2048c4b8b971af9c276cffe3574f965720e5798921af9b487d73 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Nov 26 12:51:21 compute-0 podman[247625]: 2025-11-26 12:51:21.694153018 +0000 UTC m=+0.102084599 container start 919277d59aea2048c4b8b971af9c276cffe3574f965720e5798921af9b487d73 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 26 12:51:21 compute-0 python3.9[247606]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 26 12:51:21 compute-0 nova_compute_init[247644]: INFO:nova_statedir:Applying nova statedir ownership
Nov 26 12:51:21 compute-0 nova_compute_init[247644]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 26 12:51:21 compute-0 nova_compute_init[247644]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 26 12:51:21 compute-0 nova_compute_init[247644]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 26 12:51:21 compute-0 nova_compute_init[247644]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 26 12:51:21 compute-0 nova_compute_init[247644]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 26 12:51:21 compute-0 nova_compute_init[247644]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 26 12:51:21 compute-0 nova_compute_init[247644]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 26 12:51:21 compute-0 nova_compute_init[247644]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 26 12:51:21 compute-0 nova_compute_init[247644]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 26 12:51:21 compute-0 nova_compute_init[247644]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 26 12:51:21 compute-0 nova_compute_init[247644]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 26 12:51:21 compute-0 nova_compute_init[247644]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 26 12:51:21 compute-0 nova_compute_init[247644]: INFO:nova_statedir:Nova statedir ownership complete
Nov 26 12:51:21 compute-0 systemd[1]: libpod-919277d59aea2048c4b8b971af9c276cffe3574f965720e5798921af9b487d73.scope: Deactivated successfully.
Nov 26 12:51:21 compute-0 podman[247645]: 2025-11-26 12:51:21.75461145 +0000 UTC m=+0.034053000 container died 919277d59aea2048c4b8b971af9c276cffe3574f965720e5798921af9b487d73 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute_init, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:51:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-919277d59aea2048c4b8b971af9c276cffe3574f965720e5798921af9b487d73-userdata-shm.mount: Deactivated successfully.
Nov 26 12:51:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-63061608008ba0167fea61eeeb49b1e981373e0a439ce6ecff75ac95fb7d89e2-merged.mount: Deactivated successfully.
Nov 26 12:51:21 compute-0 podman[247653]: 2025-11-26 12:51:21.790751184 +0000 UTC m=+0.035015674 container cleanup 919277d59aea2048c4b8b971af9c276cffe3574f965720e5798921af9b487d73 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute_init, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible)
Nov 26 12:51:21 compute-0 systemd[1]: libpod-conmon-919277d59aea2048c4b8b971af9c276cffe3574f965720e5798921af9b487d73.scope: Deactivated successfully.
Nov 26 12:51:21 compute-0 sudo[247604]: pam_unix(sudo:session): session closed for user root
Nov 26 12:51:22 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:22 compute-0 sshd-session[217568]: Connection closed by 192.168.122.30 port 48448
Nov 26 12:51:22 compute-0 sshd-session[217565]: pam_unix(sshd:session): session closed for user zuul
Nov 26 12:51:22 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Nov 26 12:51:22 compute-0 systemd[1]: session-49.scope: Consumed 1min 43.958s CPU time.
Nov 26 12:51:22 compute-0 systemd-logind[777]: Session 49 logged out. Waiting for processes to exit.
Nov 26 12:51:22 compute-0 systemd-logind[777]: Removed session 49.
Nov 26 12:51:22 compute-0 nova_compute[247443]: 2025-11-26 12:51:22.684 247447 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 26 12:51:22 compute-0 nova_compute[247443]: 2025-11-26 12:51:22.684 247447 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 26 12:51:22 compute-0 nova_compute[247443]: 2025-11-26 12:51:22.685 247447 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 26 12:51:22 compute-0 nova_compute[247443]: 2025-11-26 12:51:22.685 247447 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 26 12:51:22 compute-0 nova_compute[247443]: 2025-11-26 12:51:22.799 247447 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 12:51:22 compute-0 nova_compute[247443]: 2025-11-26 12:51:22.810 247447 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 12:51:22 compute-0 nova_compute[247443]: 2025-11-26 12:51:22.810 247447 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.200 247447 INFO nova.virt.driver [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 26 12:51:23 compute-0 ceph-mon[74966]: pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.286 247447 INFO nova.compute.provider_config [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.296 247447 DEBUG oslo_concurrency.lockutils [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.296 247447 DEBUG oslo_concurrency.lockutils [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.296 247447 DEBUG oslo_concurrency.lockutils [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.297 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.297 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.297 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.297 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.297 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.297 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.298 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.298 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.298 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.298 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.298 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.298 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.298 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.299 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.299 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.299 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.299 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.299 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.299 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.299 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.300 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.300 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.300 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.300 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.300 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.300 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.300 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.301 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.301 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.301 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.301 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.301 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.301 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.301 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.302 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.302 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.302 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.302 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.302 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.302 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.302 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.303 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.303 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.303 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.303 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.303 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.303 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.304 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.304 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.304 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.304 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.304 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.304 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.304 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.305 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.305 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.305 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.305 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.305 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.305 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.305 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.306 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.306 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.306 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.306 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.306 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.306 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.306 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.306 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.307 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.307 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.307 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.307 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.307 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.307 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.307 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.308 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.308 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.308 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.308 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.308 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.308 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.308 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.309 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.309 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.309 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.309 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.309 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.309 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.309 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.310 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.310 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.310 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.310 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.310 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.310 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.310 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.310 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.311 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.311 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.311 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.311 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.311 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.311 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.311 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.312 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.312 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.312 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.312 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.312 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.312 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.312 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.312 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.313 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.313 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.313 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.313 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.313 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.313 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.313 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.314 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.314 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.314 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.314 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.314 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.314 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.314 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.314 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.315 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.315 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.315 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.315 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.315 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.315 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.315 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.316 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.316 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.316 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.316 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.316 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.316 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.316 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.317 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.317 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.317 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.317 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.317 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.317 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.317 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.318 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.318 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.318 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.318 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.318 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.318 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.318 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.319 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.319 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.319 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.319 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.319 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.319 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.319 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.319 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.320 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.320 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.320 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.320 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.320 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.320 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.320 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.321 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.321 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.321 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.321 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.321 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.321 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.321 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.322 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.322 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.322 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.322 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.322 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.322 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.322 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.323 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.323 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.323 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.323 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.323 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.323 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.323 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.324 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.324 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.324 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.324 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.324 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.324 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.324 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.325 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.325 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.325 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.325 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.325 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.325 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.325 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.325 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.326 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.326 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.326 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.326 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.326 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.326 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.326 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.327 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.327 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.327 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.327 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.327 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.327 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.327 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.328 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.328 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.328 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.328 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.328 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.328 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.328 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.329 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.329 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.329 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.329 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.329 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.329 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.329 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.330 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.330 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.330 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.330 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.330 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.330 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.330 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.331 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.331 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.331 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.331 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.331 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.331 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.331 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.331 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.332 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.332 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.332 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.332 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.332 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.332 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.332 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.333 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.333 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.333 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.333 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.333 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.333 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.333 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.334 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.334 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.334 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.334 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.334 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.334 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.334 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.335 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.335 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.335 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.335 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.335 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.335 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.335 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.335 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.336 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.336 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.336 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.336 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.336 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.336 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.336 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.337 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.337 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.337 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.337 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.337 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.337 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.337 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.338 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.338 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.338 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.338 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.338 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.338 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.338 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.339 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.339 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.339 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.339 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.339 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.339 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.339 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.340 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.340 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.340 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.340 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.340 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.340 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.340 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.340 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.341 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.341 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.341 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.341 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.341 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.341 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.341 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.342 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.342 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.342 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.342 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.342 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.342 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.342 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.343 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.343 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.343 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.343 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.343 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.343 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.343 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.343 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.344 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.344 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.344 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.344 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.344 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.344 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.344 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.345 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.345 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.345 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.345 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.345 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.345 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.346 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.346 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.346 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.346 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.346 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.346 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.347 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.347 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.347 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.347 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.347 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.347 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.348 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.348 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.348 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.348 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.348 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.348 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.348 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.349 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.349 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.349 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.349 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.349 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.349 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.349 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.349 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.350 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.350 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.350 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.350 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.350 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.350 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.350 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.351 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.351 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.351 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.351 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.351 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.351 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.351 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.352 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.352 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.352 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.352 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.352 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.352 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.352 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.353 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.353 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.353 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.353 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.353 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.353 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.353 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.354 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.354 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.354 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.354 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.354 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.354 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.354 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.354 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.355 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.355 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.355 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.355 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.355 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.355 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.355 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.356 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.356 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.356 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.356 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.356 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.356 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.356 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.357 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.357 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.357 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.357 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.357 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.357 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.357 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.357 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.358 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.358 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.358 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.358 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.358 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.358 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.358 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.359 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.359 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.359 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.359 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.359 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.359 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.359 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.359 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.360 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.360 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.360 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.360 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.360 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.360 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.360 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.361 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.361 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.361 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.361 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.361 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.361 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.361 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.362 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.362 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.362 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.362 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.362 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.362 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.362 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.363 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.363 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.363 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.363 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.363 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.363 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.363 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.364 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.364 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.364 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.364 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.364 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.364 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.364 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.364 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.365 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.365 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.365 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.365 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.365 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.365 247447 WARNING oslo_config.cfg [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 26 12:51:23 compute-0 nova_compute[247443]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 26 12:51:23 compute-0 nova_compute[247443]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 26 12:51:23 compute-0 nova_compute[247443]: and ``live_migration_inbound_addr`` respectively.
Nov 26 12:51:23 compute-0 nova_compute[247443]: ).  Its value may be silently ignored in the future.
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.366 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.366 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.366 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.366 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.366 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.366 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.366 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.367 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.367 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.367 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.367 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.367 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.367 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.367 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.368 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.368 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.368 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.368 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.368 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.rbd_secret_uuid        = f7d7fe93-41e5-51c4-b72d-63b38686102e log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.368 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.368 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.369 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.369 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.369 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.369 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.369 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.369 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.369 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.370 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.370 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.370 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.370 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.370 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.370 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.370 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.371 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.371 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.371 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.371 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.371 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.371 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.371 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.372 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.372 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.372 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.372 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.372 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.372 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.372 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.373 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.373 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.373 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.373 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.373 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.373 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.373 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.374 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.374 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.374 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.374 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.374 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.374 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.374 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.374 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.375 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.375 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.375 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.375 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.375 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.375 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.375 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.376 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.376 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.376 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.376 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.376 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.376 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.376 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.376 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.377 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.377 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.377 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.377 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.377 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.377 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.377 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.378 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.378 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.378 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.378 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.378 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.378 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.378 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.379 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.379 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.379 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.379 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.379 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.379 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.379 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.380 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.380 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.380 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.380 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.380 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.380 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.380 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.380 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.381 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.381 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.381 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.381 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.381 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.381 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.381 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.382 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.382 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.382 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.382 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.382 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.382 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.382 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.382 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.383 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.383 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.383 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.383 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.383 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.383 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.383 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.384 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.384 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.384 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.384 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.384 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.384 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.384 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.385 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.385 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.385 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.385 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.385 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.385 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.386 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.386 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.386 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.386 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.386 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.386 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.386 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.387 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.387 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.387 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.387 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.387 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.387 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.387 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.388 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.388 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.388 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.388 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.388 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.388 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.388 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.389 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.389 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.389 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.389 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.389 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.389 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.389 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.390 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.390 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.390 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.390 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.390 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.390 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.390 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.391 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.391 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.391 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.391 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.391 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.391 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.391 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.392 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.392 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.392 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.392 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.392 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.392 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.392 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.393 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.393 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.393 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.393 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.393 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.393 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.393 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.394 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.394 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.394 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.394 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.394 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.394 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.394 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.395 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.395 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.395 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.395 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.395 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.395 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.395 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.396 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.396 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.396 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.396 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.396 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.396 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.396 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.397 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.397 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.397 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.397 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.397 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.397 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.397 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.397 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.398 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.398 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.398 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.398 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.398 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.398 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.398 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.399 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.399 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.399 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.399 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.399 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.399 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.399 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.400 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.400 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.400 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.400 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.400 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.400 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.400 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.401 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.401 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.401 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.401 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.401 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.401 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.402 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.402 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.402 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.402 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.402 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.402 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.402 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.403 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.403 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.403 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.403 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.403 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.403 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.403 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.403 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.404 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.404 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.404 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.404 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.404 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.404 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.404 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.405 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.405 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.405 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.405 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.405 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.405 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.405 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.406 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.406 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.406 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.406 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.406 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.406 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.406 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.407 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.407 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.407 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.407 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.407 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.407 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.407 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.408 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.408 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.408 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.408 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.408 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.408 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.408 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.409 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.409 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.409 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.409 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.409 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.409 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.409 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.410 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.410 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.410 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.410 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.410 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.410 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.410 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.411 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.411 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.411 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.411 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.411 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.411 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.411 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.411 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.412 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.412 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.412 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.412 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.412 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.412 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.412 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.413 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.413 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.413 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.413 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.413 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.413 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.413 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.414 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.414 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.414 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.414 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.414 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.414 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.414 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.415 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.415 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.415 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.415 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.415 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.415 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.415 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.416 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.416 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.416 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.416 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.416 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.416 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.416 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.417 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.417 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.417 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.417 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.417 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.417 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.417 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.418 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.418 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.418 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.418 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.418 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.418 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.418 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.418 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.419 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.419 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.419 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.419 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.419 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.419 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.419 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.420 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.420 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.420 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.420 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.420 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.420 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.420 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.421 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.421 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.421 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.421 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.421 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.421 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.421 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.422 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.422 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.422 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.422 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.422 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.422 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.422 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.422 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.423 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.423 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.423 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.423 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.423 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.423 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.423 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.424 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.424 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.424 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] privsep_osbrick.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.424 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.424 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.424 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.424 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.425 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.425 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] nova_sys_admin.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.425 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.425 247447 DEBUG oslo_service.service [None req-977d53bc-d049-474d-9a5e-f06c2acfb259 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.426 247447 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.434 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.435 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.435 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.435 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.446 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f685bbc9490> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.449 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f685bbc9490> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.450 247447 INFO nova.virt.libvirt.driver [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Connection event '1' reason 'None'
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.453 247447 INFO nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Libvirt host capabilities <capabilities>
Nov 26 12:51:23 compute-0 nova_compute[247443]: 
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <host>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <uuid>0a08c8a3-e2a8-4364-8947-610c4936d879</uuid>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <cpu>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <arch>x86_64</arch>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model>EPYC-Milan-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <vendor>AMD</vendor>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <microcode version='167776725'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <signature family='25' model='1' stepping='1'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <topology sockets='4' dies='1' clusters='1' cores='1' threads='1'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <maxphysaddr mode='emulate' bits='48'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature name='x2apic'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature name='tsc-deadline'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature name='osxsave'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature name='hypervisor'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature name='tsc_adjust'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature name='ospke'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature name='vaes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature name='vpclmulqdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature name='spec-ctrl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature name='stibp'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature name='arch-capabilities'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature name='ssbd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature name='cmp_legacy'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature name='virt-ssbd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature name='lbrv'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature name='tsc-scale'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature name='vmcb-clean'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature name='pause-filter'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature name='pfthreshold'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature name='v-vmsave-vmload'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature name='vgif'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature name='rdctl-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature name='skip-l1dfl-vmentry'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature name='mds-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature name='pschange-mc-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <pages unit='KiB' size='4'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <pages unit='KiB' size='2048'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <pages unit='KiB' size='1048576'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </cpu>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <power_management>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <suspend_mem/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </power_management>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <iommu support='no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <migration_features>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <live/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <uri_transports>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <uri_transport>tcp</uri_transport>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <uri_transport>rdma</uri_transport>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </uri_transports>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </migration_features>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <topology>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <cells num='1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <cell id='0'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:           <memory unit='KiB'>7865364</memory>
Nov 26 12:51:23 compute-0 nova_compute[247443]:           <pages unit='KiB' size='4'>1966341</pages>
Nov 26 12:51:23 compute-0 nova_compute[247443]:           <pages unit='KiB' size='2048'>0</pages>
Nov 26 12:51:23 compute-0 nova_compute[247443]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 26 12:51:23 compute-0 nova_compute[247443]:           <distances>
Nov 26 12:51:23 compute-0 nova_compute[247443]:             <sibling id='0' value='10'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:           </distances>
Nov 26 12:51:23 compute-0 nova_compute[247443]:           <cpus num='4'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:           </cpus>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         </cell>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </cells>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </topology>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <cache>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </cache>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <secmodel>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model>selinux</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <doi>0</doi>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </secmodel>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <secmodel>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model>dac</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <doi>0</doi>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </secmodel>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   </host>
Nov 26 12:51:23 compute-0 nova_compute[247443]: 
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <guest>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <os_type>hvm</os_type>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <arch name='i686'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <wordsize>32</wordsize>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <domain type='qemu'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <domain type='kvm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </arch>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <features>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <pae/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <nonpae/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <acpi default='on' toggle='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <apic default='on' toggle='no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <cpuselection/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <deviceboot/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <disksnapshot default='on' toggle='no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <externalSnapshot/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </features>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   </guest>
Nov 26 12:51:23 compute-0 nova_compute[247443]: 
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <guest>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <os_type>hvm</os_type>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <arch name='x86_64'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <wordsize>64</wordsize>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <domain type='qemu'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <domain type='kvm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </arch>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <features>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <acpi default='on' toggle='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <apic default='on' toggle='no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <cpuselection/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <deviceboot/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <disksnapshot default='on' toggle='no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <externalSnapshot/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </features>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   </guest>
Nov 26 12:51:23 compute-0 nova_compute[247443]: 
Nov 26 12:51:23 compute-0 nova_compute[247443]: </capabilities>
Nov 26 12:51:23 compute-0 nova_compute[247443]: 
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.462 247447 WARNING nova.virt.libvirt.driver [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.462 247447 DEBUG nova.virt.libvirt.volume.mount [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.466 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.488 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 26 12:51:23 compute-0 nova_compute[247443]: <domainCapabilities>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <path>/usr/libexec/qemu-kvm</path>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <domain>kvm</domain>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <arch>i686</arch>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <vcpu max='240'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <iothreads supported='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <os supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <enum name='firmware'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <loader supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='type'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>rom</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>pflash</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='readonly'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>yes</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>no</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='secure'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>no</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </loader>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   </os>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <cpu>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <mode name='host-passthrough' supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='hostPassthroughMigratable'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>on</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>off</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </mode>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <mode name='maximum' supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='maximumMigratable'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>on</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>off</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </mode>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <mode name='host-model' supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model fallback='forbid'>EPYC-Milan</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <vendor>AMD</vendor>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <maxphysaddr mode='passthrough' limit='48'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='x2apic'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='tsc-deadline'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='hypervisor'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='tsc_adjust'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='vaes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='vpclmulqdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='spec-ctrl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='stibp'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='ssbd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='cmp_legacy'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='overflow-recov'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='succor'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='virt-ssbd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='lbrv'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='tsc-scale'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='vmcb-clean'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='flushbyasid'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='pause-filter'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='pfthreshold'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='v-vmsave-vmload'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='vgif'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </mode>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <mode name='custom' supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Broadwell'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Broadwell-IBRS'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Broadwell-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Broadwell-v3'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cascadelake-Server'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cascadelake-Server-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cascadelake-Server-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cascadelake-Server-v3'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cascadelake-Server-v4'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cascadelake-Server-v5'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cooperlake'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cooperlake-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cooperlake-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Denverton'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mpx'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Denverton-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mpx'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='EPYC-Genoa'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amd-psfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='auto-ibrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='no-nested-data-bp'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='null-sel-clr-base'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='stibp-always-on'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='EPYC-Genoa-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amd-psfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='auto-ibrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='no-nested-data-bp'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='null-sel-clr-base'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='stibp-always-on'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='EPYC-Milan-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amd-psfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='no-nested-data-bp'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='null-sel-clr-base'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='stibp-always-on'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='GraniteRapids'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-tile'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fbsdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrc'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fzrm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mcdt-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='pbrsb-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='prefetchiti'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='psdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='sbdr-ssdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tsx-ldtrk'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='GraniteRapids-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-tile'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fbsdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrc'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fzrm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mcdt-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='pbrsb-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='prefetchiti'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='psdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='sbdr-ssdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tsx-ldtrk'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='GraniteRapids-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-tile'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx10'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx10-128'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx10-256'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx10-512'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cldemote'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fbsdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrc'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fzrm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mcdt-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdir64b'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdiri'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='pbrsb-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='prefetchiti'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='psdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='sbdr-ssdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tsx-ldtrk'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Haswell'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Haswell-IBRS'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Haswell-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Haswell-v3'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-noTSX'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-v3'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-v4'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-v5'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-v6'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-v7'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='KnightsMill'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-4fmaps'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-4vnniw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512er'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512pf'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='KnightsMill-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-4fmaps'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-4vnniw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512er'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512pf'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Opteron_G4'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fma4'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xop'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Opteron_G4-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fma4'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xop'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Opteron_G5'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fma4'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tbm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xop'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Opteron_G5-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fma4'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tbm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xop'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='SapphireRapids'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-tile'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrc'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fzrm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tsx-ldtrk'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='SapphireRapids-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-tile'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrc'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fzrm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tsx-ldtrk'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='SapphireRapids-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-tile'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fbsdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrc'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fzrm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='psdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='sbdr-ssdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tsx-ldtrk'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='SapphireRapids-v3'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-tile'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cldemote'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fbsdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrc'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fzrm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdir64b'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdiri'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='psdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='sbdr-ssdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tsx-ldtrk'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='SierraForest'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-ne-convert'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cmpccxadd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fbsdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mcdt-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='pbrsb-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='psdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='sbdr-ssdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='SierraForest-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-ne-convert'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cmpccxadd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fbsdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mcdt-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='pbrsb-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='psdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='sbdr-ssdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Client'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Client-IBRS'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Client-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Client-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server-IBRS'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server-v3'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server-v4'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server-v5'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Snowridge'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cldemote'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='core-capability'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdir64b'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdiri'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mpx'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='split-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Snowridge-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cldemote'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='core-capability'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdir64b'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdiri'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mpx'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='split-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Snowridge-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cldemote'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='core-capability'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdir64b'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdiri'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='split-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Snowridge-v3'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cldemote'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='core-capability'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdir64b'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdiri'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='split-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Snowridge-v4'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cldemote'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdir64b'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdiri'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='athlon'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnow'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnowext'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='athlon-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnow'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnowext'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='core2duo'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='core2duo-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='coreduo'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='coreduo-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='n270'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='n270-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='phenom'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnow'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnowext'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='phenom-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnow'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnowext'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </mode>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   </cpu>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <memoryBacking supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <enum name='sourceType'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <value>file</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <value>anonymous</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <value>memfd</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   </memoryBacking>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <devices>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <disk supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='diskDevice'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>disk</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>cdrom</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>floppy</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>lun</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='bus'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>ide</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>fdc</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>scsi</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>usb</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>sata</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='model'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio-transitional</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio-non-transitional</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </disk>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <graphics supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='type'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>vnc</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>egl-headless</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>dbus</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </graphics>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <video supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='modelType'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>vga</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>cirrus</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>none</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>bochs</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>ramfb</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </video>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <hostdev supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='mode'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>subsystem</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='startupPolicy'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>default</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>mandatory</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>requisite</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>optional</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='subsysType'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>usb</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>pci</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>scsi</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='capsType'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='pciBackend'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </hostdev>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <rng supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='model'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio-transitional</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio-non-transitional</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='backendModel'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>random</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>egd</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>builtin</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </rng>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <filesystem supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='driverType'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>path</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>handle</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtiofs</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </filesystem>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <tpm supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='model'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>tpm-tis</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>tpm-crb</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='backendModel'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>emulator</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>external</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='backendVersion'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>2.0</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </tpm>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <redirdev supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='bus'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>usb</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </redirdev>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <channel supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='type'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>pty</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>unix</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </channel>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <crypto supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='model'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='type'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>qemu</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='backendModel'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>builtin</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </crypto>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <interface supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='backendType'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>default</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>passt</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </interface>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <panic supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='model'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>isa</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>hyperv</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </panic>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <console supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='type'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>null</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>vc</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>pty</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>dev</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>file</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>pipe</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>stdio</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>udp</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>tcp</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>unix</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>qemu-vdagent</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>dbus</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </console>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   </devices>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <features>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <gic supported='no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <vmcoreinfo supported='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <genid supported='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <backingStoreInput supported='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <backup supported='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <async-teardown supported='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <ps2 supported='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <sev supported='no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <sgx supported='no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <hyperv supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='features'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>relaxed</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>vapic</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>spinlocks</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>vpindex</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>runtime</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>synic</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>stimer</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>reset</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>vendor_id</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>frequencies</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>reenlightenment</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>tlbflush</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>ipi</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>avic</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>emsr_bitmap</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>xmm_input</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <defaults>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <spinlocks>4095</spinlocks>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <stimer_direct>on</stimer_direct>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <tlbflush_direct>on</tlbflush_direct>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <tlbflush_extended>on</tlbflush_extended>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </defaults>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </hyperv>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <launchSecurity supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='sectype'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>tdx</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </launchSecurity>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   </features>
Nov 26 12:51:23 compute-0 nova_compute[247443]: </domainCapabilities>
Nov 26 12:51:23 compute-0 nova_compute[247443]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.503 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 26 12:51:23 compute-0 nova_compute[247443]: <domainCapabilities>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <path>/usr/libexec/qemu-kvm</path>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <domain>kvm</domain>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <arch>i686</arch>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <vcpu max='4096'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <iothreads supported='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <os supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <enum name='firmware'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <loader supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='type'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>rom</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>pflash</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='readonly'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>yes</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>no</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='secure'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>no</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </loader>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   </os>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <cpu>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <mode name='host-passthrough' supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='hostPassthroughMigratable'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>on</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>off</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </mode>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <mode name='maximum' supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='maximumMigratable'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>on</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>off</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </mode>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <mode name='host-model' supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model fallback='forbid'>EPYC-Milan</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <vendor>AMD</vendor>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <maxphysaddr mode='passthrough' limit='48'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='x2apic'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='tsc-deadline'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='hypervisor'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='tsc_adjust'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='vaes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='vpclmulqdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='spec-ctrl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='stibp'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='ssbd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='cmp_legacy'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='overflow-recov'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='succor'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='virt-ssbd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='lbrv'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='tsc-scale'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='vmcb-clean'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='flushbyasid'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='pause-filter'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='pfthreshold'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='v-vmsave-vmload'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='vgif'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </mode>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <mode name='custom' supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Broadwell'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Broadwell-IBRS'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Broadwell-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Broadwell-v3'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cascadelake-Server'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cascadelake-Server-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cascadelake-Server-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cascadelake-Server-v3'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cascadelake-Server-v4'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cascadelake-Server-v5'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cooperlake'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cooperlake-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cooperlake-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Denverton'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mpx'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Denverton-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mpx'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='EPYC-Genoa'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amd-psfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='auto-ibrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='no-nested-data-bp'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='null-sel-clr-base'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='stibp-always-on'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='EPYC-Genoa-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amd-psfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='auto-ibrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='no-nested-data-bp'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='null-sel-clr-base'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='stibp-always-on'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='EPYC-Milan-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amd-psfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='no-nested-data-bp'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='null-sel-clr-base'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='stibp-always-on'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='GraniteRapids'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-tile'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fbsdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrc'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fzrm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mcdt-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='pbrsb-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='prefetchiti'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='psdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='sbdr-ssdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tsx-ldtrk'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='GraniteRapids-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-tile'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fbsdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrc'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fzrm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mcdt-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='pbrsb-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='prefetchiti'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='psdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='sbdr-ssdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tsx-ldtrk'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='GraniteRapids-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-tile'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx10'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx10-128'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx10-256'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx10-512'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cldemote'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fbsdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrc'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fzrm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mcdt-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdir64b'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdiri'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='pbrsb-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='prefetchiti'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='psdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='sbdr-ssdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tsx-ldtrk'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Haswell'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Haswell-IBRS'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Haswell-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Haswell-v3'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-noTSX'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-v3'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-v4'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-v5'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-v6'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-v7'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='KnightsMill'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-4fmaps'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-4vnniw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512er'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512pf'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='KnightsMill-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-4fmaps'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-4vnniw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512er'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512pf'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Opteron_G4'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fma4'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xop'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Opteron_G4-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fma4'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xop'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Opteron_G5'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fma4'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tbm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xop'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Opteron_G5-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fma4'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tbm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xop'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='SapphireRapids'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-tile'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrc'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fzrm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tsx-ldtrk'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='SapphireRapids-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-tile'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrc'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fzrm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tsx-ldtrk'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='SapphireRapids-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-tile'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fbsdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrc'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fzrm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='psdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='sbdr-ssdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tsx-ldtrk'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='SapphireRapids-v3'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-tile'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cldemote'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fbsdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrc'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fzrm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdir64b'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdiri'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='psdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='sbdr-ssdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tsx-ldtrk'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='SierraForest'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-ne-convert'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cmpccxadd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fbsdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mcdt-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='pbrsb-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='psdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='sbdr-ssdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='SierraForest-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-ne-convert'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cmpccxadd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fbsdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mcdt-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='pbrsb-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='psdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='sbdr-ssdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Client'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Client-IBRS'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Client-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Client-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server-IBRS'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server-v3'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server-v4'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server-v5'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Snowridge'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cldemote'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='core-capability'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdir64b'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdiri'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mpx'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='split-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Snowridge-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cldemote'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='core-capability'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdir64b'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdiri'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mpx'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='split-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Snowridge-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cldemote'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='core-capability'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdir64b'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdiri'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='split-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Snowridge-v3'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cldemote'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='core-capability'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdir64b'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdiri'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='split-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Snowridge-v4'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cldemote'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdir64b'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdiri'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='athlon'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnow'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnowext'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='athlon-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnow'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnowext'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='core2duo'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='core2duo-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='coreduo'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='coreduo-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='n270'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='n270-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='phenom'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnow'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnowext'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='phenom-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnow'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnowext'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </mode>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   </cpu>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <memoryBacking supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <enum name='sourceType'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <value>file</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <value>anonymous</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <value>memfd</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   </memoryBacking>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <devices>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <disk supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='diskDevice'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>disk</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>cdrom</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>floppy</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>lun</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='bus'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>fdc</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>scsi</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>usb</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>sata</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='model'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio-transitional</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio-non-transitional</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </disk>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <graphics supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='type'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>vnc</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>egl-headless</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>dbus</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </graphics>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <video supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='modelType'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>vga</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>cirrus</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>none</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>bochs</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>ramfb</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </video>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <hostdev supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='mode'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>subsystem</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='startupPolicy'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>default</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>mandatory</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>requisite</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>optional</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='subsysType'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>usb</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>pci</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>scsi</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='capsType'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='pciBackend'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </hostdev>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <rng supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='model'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio-transitional</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio-non-transitional</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='backendModel'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>random</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>egd</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>builtin</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </rng>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <filesystem supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='driverType'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>path</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>handle</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtiofs</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </filesystem>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <tpm supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='model'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>tpm-tis</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>tpm-crb</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='backendModel'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>emulator</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>external</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='backendVersion'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>2.0</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </tpm>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <redirdev supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='bus'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>usb</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </redirdev>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <channel supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='type'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>pty</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>unix</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </channel>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <crypto supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='model'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='type'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>qemu</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='backendModel'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>builtin</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </crypto>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <interface supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='backendType'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>default</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>passt</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </interface>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <panic supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='model'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>isa</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>hyperv</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </panic>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <console supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='type'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>null</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>vc</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>pty</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>dev</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>file</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>pipe</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>stdio</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>udp</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>tcp</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>unix</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>qemu-vdagent</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>dbus</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </console>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   </devices>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <features>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <gic supported='no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <vmcoreinfo supported='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <genid supported='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <backingStoreInput supported='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <backup supported='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <async-teardown supported='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <ps2 supported='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <sev supported='no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <sgx supported='no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <hyperv supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='features'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>relaxed</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>vapic</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>spinlocks</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>vpindex</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>runtime</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>synic</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>stimer</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>reset</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>vendor_id</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>frequencies</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>reenlightenment</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>tlbflush</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>ipi</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>avic</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>emsr_bitmap</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>xmm_input</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <defaults>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <spinlocks>4095</spinlocks>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <stimer_direct>on</stimer_direct>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <tlbflush_direct>on</tlbflush_direct>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <tlbflush_extended>on</tlbflush_extended>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </defaults>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </hyperv>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <launchSecurity supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='sectype'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>tdx</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </launchSecurity>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   </features>
Nov 26 12:51:23 compute-0 nova_compute[247443]: </domainCapabilities>
Nov 26 12:51:23 compute-0 nova_compute[247443]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
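[editor's note] The <domainCapabilities> document logged above is what libvirt returns for one of the configured machine types and what nova's _get_domain_capabilities helper (host.py:1037, as referenced in the log line above) parses and caches per (arch, machine type) pair. As a point of reference only, a minimal sketch of fetching such a document directly through the libvirt Python bindings is given below; the connection URI and the flags value are illustrative assumptions, not values taken from this log.

import libvirt  # python3-libvirt bindings

# Assumed local system URI for illustration; nova manages its own connection.
conn = libvirt.open('qemu:///system')

# Arguments mirror values reported elsewhere in this log: emulator path, arch,
# machine type and virt type. Returns the <domainCapabilities> XML as a string.
caps_xml = conn.getDomainCapabilities(
    '/usr/libexec/qemu-kvm',  # emulator binary, as reported in the log
    'x86_64',                 # architecture
    'q35',                    # machine type (nova queries both 'pc' and 'q35')
    'kvm',                    # virtualization type
    0)                        # flags (assumed: none)
print(caps_xml)
conn.close()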
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.504 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.508 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 26 12:51:23 compute-0 nova_compute[247443]: <domainCapabilities>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <path>/usr/libexec/qemu-kvm</path>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <domain>kvm</domain>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <arch>x86_64</arch>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <vcpu max='240'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <iothreads supported='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <os supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <enum name='firmware'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <loader supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='type'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>rom</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>pflash</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='readonly'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>yes</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>no</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='secure'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>no</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </loader>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   </os>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <cpu>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <mode name='host-passthrough' supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='hostPassthroughMigratable'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>on</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>off</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </mode>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <mode name='maximum' supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='maximumMigratable'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>on</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>off</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </mode>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <mode name='host-model' supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model fallback='forbid'>EPYC-Milan</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <vendor>AMD</vendor>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <maxphysaddr mode='passthrough' limit='48'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='x2apic'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='tsc-deadline'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='hypervisor'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='tsc_adjust'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='vaes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='vpclmulqdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='spec-ctrl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='stibp'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='ssbd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='cmp_legacy'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='overflow-recov'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='succor'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='virt-ssbd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='lbrv'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='tsc-scale'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='vmcb-clean'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='flushbyasid'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='pause-filter'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='pfthreshold'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='v-vmsave-vmload'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='vgif'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </mode>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <mode name='custom' supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Broadwell'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Broadwell-IBRS'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Broadwell-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Broadwell-v3'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cascadelake-Server'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cascadelake-Server-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cascadelake-Server-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cascadelake-Server-v3'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cascadelake-Server-v4'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cascadelake-Server-v5'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cooperlake'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cooperlake-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cooperlake-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Denverton'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mpx'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Denverton-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mpx'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='EPYC-Genoa'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amd-psfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='auto-ibrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='no-nested-data-bp'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='null-sel-clr-base'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='stibp-always-on'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='EPYC-Genoa-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amd-psfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='auto-ibrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='no-nested-data-bp'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='null-sel-clr-base'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='stibp-always-on'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='EPYC-Milan-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amd-psfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='no-nested-data-bp'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='null-sel-clr-base'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='stibp-always-on'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='GraniteRapids'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-tile'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fbsdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrc'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fzrm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mcdt-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='pbrsb-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='prefetchiti'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='psdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='sbdr-ssdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tsx-ldtrk'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='GraniteRapids-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-tile'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fbsdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrc'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fzrm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mcdt-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='pbrsb-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='prefetchiti'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='psdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='sbdr-ssdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tsx-ldtrk'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='GraniteRapids-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-tile'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx10'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx10-128'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx10-256'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx10-512'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cldemote'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fbsdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrc'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fzrm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mcdt-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdir64b'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdiri'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='pbrsb-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='prefetchiti'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='psdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='sbdr-ssdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tsx-ldtrk'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Haswell'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Haswell-IBRS'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Haswell-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Haswell-v3'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-noTSX'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-v3'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-v4'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-v5'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-v6'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-v7'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='KnightsMill'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-4fmaps'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-4vnniw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512er'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512pf'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='KnightsMill-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-4fmaps'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-4vnniw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512er'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512pf'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Opteron_G4'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fma4'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xop'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Opteron_G4-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fma4'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xop'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Opteron_G5'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fma4'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tbm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xop'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Opteron_G5-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fma4'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tbm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xop'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='SapphireRapids'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-tile'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrc'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fzrm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tsx-ldtrk'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='SapphireRapids-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-tile'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrc'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fzrm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tsx-ldtrk'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='SapphireRapids-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-tile'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fbsdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrc'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fzrm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='psdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='sbdr-ssdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tsx-ldtrk'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='SapphireRapids-v3'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-tile'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cldemote'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fbsdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrc'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fzrm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdir64b'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdiri'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='psdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='sbdr-ssdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tsx-ldtrk'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='SierraForest'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-ne-convert'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cmpccxadd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fbsdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mcdt-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='pbrsb-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='psdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='sbdr-ssdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='SierraForest-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-ne-convert'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cmpccxadd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fbsdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mcdt-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='pbrsb-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='psdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='sbdr-ssdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Client'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Client-IBRS'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Client-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Client-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server-IBRS'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server-v3'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server-v4'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server-v5'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Snowridge'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cldemote'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='core-capability'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdir64b'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdiri'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mpx'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='split-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Snowridge-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cldemote'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='core-capability'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdir64b'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdiri'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mpx'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='split-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Snowridge-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cldemote'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='core-capability'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdir64b'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdiri'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='split-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Snowridge-v3'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cldemote'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='core-capability'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdir64b'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdiri'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='split-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Snowridge-v4'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cldemote'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdir64b'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdiri'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='athlon'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnow'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnowext'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='athlon-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnow'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnowext'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='core2duo'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='core2duo-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='coreduo'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='coreduo-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='n270'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='n270-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='phenom'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnow'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnowext'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='phenom-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnow'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnowext'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </mode>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   </cpu>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <memoryBacking supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <enum name='sourceType'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <value>file</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <value>anonymous</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <value>memfd</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   </memoryBacking>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <devices>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <disk supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='diskDevice'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>disk</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>cdrom</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>floppy</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>lun</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='bus'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>ide</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>fdc</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>scsi</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>usb</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>sata</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='model'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio-transitional</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio-non-transitional</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </disk>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <graphics supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='type'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>vnc</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>egl-headless</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>dbus</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </graphics>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <video supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='modelType'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>vga</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>cirrus</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>none</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>bochs</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>ramfb</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </video>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <hostdev supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='mode'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>subsystem</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='startupPolicy'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>default</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>mandatory</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>requisite</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>optional</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='subsysType'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>usb</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>pci</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>scsi</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='capsType'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='pciBackend'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </hostdev>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <rng supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='model'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio-transitional</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio-non-transitional</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='backendModel'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>random</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>egd</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>builtin</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </rng>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <filesystem supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='driverType'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>path</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>handle</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtiofs</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </filesystem>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <tpm supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='model'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>tpm-tis</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>tpm-crb</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='backendModel'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>emulator</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>external</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='backendVersion'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>2.0</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </tpm>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <redirdev supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='bus'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>usb</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </redirdev>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <channel supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='type'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>pty</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>unix</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </channel>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <crypto supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='model'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='type'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>qemu</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='backendModel'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>builtin</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </crypto>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <interface supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='backendType'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>default</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>passt</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </interface>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <panic supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='model'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>isa</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>hyperv</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </panic>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <console supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='type'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>null</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>vc</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>pty</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>dev</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>file</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>pipe</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>stdio</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>udp</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>tcp</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>unix</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>qemu-vdagent</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>dbus</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </console>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   </devices>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <features>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <gic supported='no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <vmcoreinfo supported='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <genid supported='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <backingStoreInput supported='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <backup supported='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <async-teardown supported='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <ps2 supported='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <sev supported='no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <sgx supported='no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <hyperv supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='features'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>relaxed</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>vapic</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>spinlocks</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>vpindex</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>runtime</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>synic</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>stimer</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>reset</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>vendor_id</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>frequencies</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>reenlightenment</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>tlbflush</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>ipi</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>avic</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>emsr_bitmap</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>xmm_input</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <defaults>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <spinlocks>4095</spinlocks>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <stimer_direct>on</stimer_direct>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <tlbflush_direct>on</tlbflush_direct>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <tlbflush_extended>on</tlbflush_extended>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </defaults>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </hyperv>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <launchSecurity supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='sectype'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>tdx</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </launchSecurity>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   </features>
Nov 26 12:51:23 compute-0 nova_compute[247443]: </domainCapabilities>
Nov 26 12:51:23 compute-0 nova_compute[247443]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.549 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 26 12:51:23 compute-0 nova_compute[247443]: <domainCapabilities>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <path>/usr/libexec/qemu-kvm</path>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <domain>kvm</domain>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <arch>x86_64</arch>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <vcpu max='4096'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <iothreads supported='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <os supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <enum name='firmware'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <value>efi</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <loader supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='type'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>rom</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>pflash</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='readonly'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>yes</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>no</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='secure'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>yes</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>no</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </loader>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   </os>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <cpu>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <mode name='host-passthrough' supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='hostPassthroughMigratable'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>on</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>off</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </mode>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <mode name='maximum' supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='maximumMigratable'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>on</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>off</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </mode>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <mode name='host-model' supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model fallback='forbid'>EPYC-Milan</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <vendor>AMD</vendor>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <maxphysaddr mode='passthrough' limit='48'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='x2apic'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='tsc-deadline'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='hypervisor'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='tsc_adjust'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='vaes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='vpclmulqdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='spec-ctrl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='stibp'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='ssbd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='cmp_legacy'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='overflow-recov'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='succor'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='virt-ssbd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='lbrv'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='tsc-scale'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='vmcb-clean'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='flushbyasid'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='pause-filter'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='pfthreshold'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='v-vmsave-vmload'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='vgif'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </mode>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <mode name='custom' supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Broadwell'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Broadwell-IBRS'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Broadwell-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Broadwell-v3'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cascadelake-Server'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cascadelake-Server-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cascadelake-Server-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cascadelake-Server-v3'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cascadelake-Server-v4'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cascadelake-Server-v5'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cooperlake'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cooperlake-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Cooperlake-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Denverton'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mpx'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Denverton-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mpx'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='EPYC-Genoa'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amd-psfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='auto-ibrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='no-nested-data-bp'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='null-sel-clr-base'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='stibp-always-on'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='EPYC-Genoa-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amd-psfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='auto-ibrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='no-nested-data-bp'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='null-sel-clr-base'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='stibp-always-on'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='EPYC-Milan-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amd-psfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='no-nested-data-bp'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='null-sel-clr-base'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='stibp-always-on'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='GraniteRapids'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-tile'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fbsdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrc'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fzrm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mcdt-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='pbrsb-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='prefetchiti'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='psdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='sbdr-ssdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tsx-ldtrk'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='GraniteRapids-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-tile'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fbsdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrc'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fzrm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mcdt-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='pbrsb-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='prefetchiti'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='psdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='sbdr-ssdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tsx-ldtrk'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='GraniteRapids-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-tile'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx10'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx10-128'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx10-256'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx10-512'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cldemote'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fbsdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrc'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fzrm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mcdt-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdir64b'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdiri'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='pbrsb-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='prefetchiti'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='psdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='sbdr-ssdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tsx-ldtrk'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Haswell'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Haswell-IBRS'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Haswell-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Haswell-v3'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-noTSX'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-v3'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-v4'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-v5'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-v6'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Icelake-Server-v7'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='KnightsMill'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-4fmaps'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-4vnniw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512er'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512pf'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='KnightsMill-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-4fmaps'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-4vnniw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512er'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512pf'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Opteron_G4'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fma4'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xop'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Opteron_G4-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fma4'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xop'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Opteron_G5'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fma4'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tbm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xop'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Opteron_G5-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fma4'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tbm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xop'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='SapphireRapids'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-tile'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrc'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fzrm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tsx-ldtrk'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='SapphireRapids-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-tile'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrc'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fzrm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tsx-ldtrk'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='SapphireRapids-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-tile'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fbsdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrc'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fzrm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='psdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='sbdr-ssdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tsx-ldtrk'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='SapphireRapids-v3'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='amx-tile'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-bf16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-fp16'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512-vpopcntdq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bitalg'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vbmi2'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cldemote'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fbsdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrc'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fzrm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='la57'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdir64b'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdiri'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='psdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='sbdr-ssdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='taa-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='tsx-ldtrk'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='xfd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='SierraForest'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-ne-convert'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cmpccxadd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fbsdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mcdt-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='pbrsb-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='psdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='sbdr-ssdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='SierraForest-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-ifma'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-ne-convert'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx-vnni-int8'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='bus-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cmpccxadd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fbsdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='fsrs'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ibrs-all'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mcdt-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='pbrsb-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='psdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='sbdr-ssdp-no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='serialize'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Client'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Client-IBRS'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Client-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Client-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server-IBRS'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='hle'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='rtm'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server-v3'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server-v4'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Skylake-Server-v5'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512bw'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512cd'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512dq'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512f'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='avx512vl'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Snowridge'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cldemote'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='core-capability'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdir64b'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdiri'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mpx'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='split-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Snowridge-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cldemote'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='core-capability'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdir64b'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdiri'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='mpx'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='split-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Snowridge-v2'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cldemote'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='core-capability'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdir64b'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdiri'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='split-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Snowridge-v3'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cldemote'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='core-capability'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdir64b'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdiri'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='split-lock-detect'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='Snowridge-v4'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='cldemote'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='gfni'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdir64b'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='movdiri'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='athlon'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnow'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnowext'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='athlon-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnow'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnowext'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='core2duo'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='core2duo-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='coreduo'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='coreduo-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='n270'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='n270-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='ss'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='phenom'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnow'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnowext'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <blockers model='phenom-v1'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnow'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <feature name='3dnowext'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </blockers>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </mode>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   </cpu>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <memoryBacking supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <enum name='sourceType'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <value>file</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <value>anonymous</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <value>memfd</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   </memoryBacking>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <devices>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <disk supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='diskDevice'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>disk</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>cdrom</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>floppy</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>lun</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='bus'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>fdc</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>scsi</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>usb</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>sata</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='model'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio-transitional</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio-non-transitional</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </disk>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <graphics supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='type'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>vnc</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>egl-headless</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>dbus</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </graphics>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <video supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='modelType'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>vga</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>cirrus</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>none</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>bochs</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>ramfb</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </video>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <hostdev supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='mode'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>subsystem</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='startupPolicy'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>default</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>mandatory</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>requisite</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>optional</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='subsysType'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>usb</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>pci</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>scsi</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='capsType'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='pciBackend'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </hostdev>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <rng supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='model'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio-transitional</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtio-non-transitional</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='backendModel'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>random</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>egd</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>builtin</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </rng>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <filesystem supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='driverType'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>path</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>handle</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>virtiofs</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </filesystem>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <tpm supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='model'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>tpm-tis</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>tpm-crb</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='backendModel'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>emulator</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>external</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='backendVersion'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>2.0</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </tpm>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <redirdev supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='bus'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>usb</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </redirdev>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <channel supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='type'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>pty</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>unix</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </channel>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <crypto supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='model'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='type'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>qemu</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='backendModel'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>builtin</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </crypto>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <interface supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='backendType'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>default</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>passt</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </interface>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <panic supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='model'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>isa</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>hyperv</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </panic>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <console supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='type'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>null</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>vc</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>pty</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>dev</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>file</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>pipe</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>stdio</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>udp</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>tcp</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>unix</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>qemu-vdagent</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>dbus</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </console>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   </devices>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   <features>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <gic supported='no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <vmcoreinfo supported='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <genid supported='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <backingStoreInput supported='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <backup supported='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <async-teardown supported='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <ps2 supported='yes'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <sev supported='no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <sgx supported='no'/>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <hyperv supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='features'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>relaxed</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>vapic</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>spinlocks</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>vpindex</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>runtime</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>synic</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>stimer</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>reset</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>vendor_id</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>frequencies</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>reenlightenment</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>tlbflush</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>ipi</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>avic</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>emsr_bitmap</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>xmm_input</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <defaults>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <spinlocks>4095</spinlocks>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <stimer_direct>on</stimer_direct>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <tlbflush_direct>on</tlbflush_direct>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <tlbflush_extended>on</tlbflush_extended>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </defaults>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </hyperv>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     <launchSecurity supported='yes'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       <enum name='sectype'>
Nov 26 12:51:23 compute-0 nova_compute[247443]:         <value>tdx</value>
Nov 26 12:51:23 compute-0 nova_compute[247443]:       </enum>
Nov 26 12:51:23 compute-0 nova_compute[247443]:     </launchSecurity>
Nov 26 12:51:23 compute-0 nova_compute[247443]:   </features>
Nov 26 12:51:23 compute-0 nova_compute[247443]: </domainCapabilities>
Nov 26 12:51:23 compute-0 nova_compute[247443]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
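[Editorial sketch] The domainCapabilities dump that ends above is what nova caches per emulator/arch/machine type: every model element with usable='no' is followed by a blockers element naming the host CPU features that would be needed for that named model to run. A minimal standalone way to extract the same summary, assuming virsh is available on the host and that the model list sits in the usual custom CPU mode of the domcapabilities document (the helper name and printed output below are ours, not nova's):

    import subprocess
    import xml.etree.ElementTree as ET

    def usable_cpu_models(virttype="kvm", arch="x86_64"):
        # 'virsh domcapabilities' emits the same XML document logged above.
        xml_text = subprocess.check_output(
            ["virsh", "domcapabilities", "--virttype", virttype, "--arch", arch],
            text=True,
        )
        root = ET.fromstring(xml_text)
        usable = [m.text for m in root.findall(".//cpu/mode[@name='custom']/model")
                  if m.get("usable") == "yes"]
        blocked = {
            b.get("model"): [f.get("name") for f in b.findall("feature")]
            for b in root.findall(".//cpu/mode[@name='custom']/blockers")
        }
        return usable, blocked

    usable, blocked = usable_cpu_models()
    print("usable:", ", ".join(sorted(usable)))
    # On this host the log shows e.g. Skylake-Server blocked by avx512*, hle, rtm.
    print("Skylake-Server blocked by:", ", ".join(blocked.get("Skylake-Server", [])))
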
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.600 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.601 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.601 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.601 247447 INFO nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Secure Boot support detected
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.602 247447 INFO nova.virt.libvirt.driver [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.602 247447 INFO nova.virt.libvirt.driver [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
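[Editorial sketch] The two INFO lines above record a configuration interaction rather than a problem: with [libvirt]/live_migration_permit_post_copy set to True and a libvirt/QEMU that supports post-copy, nova prefers post-copy and therefore leaves auto-converge unused. A toy decision helper restating only what the message says (the function is ours, not nova's code):

    def choose_migration_aid(permit_post_copy, post_copy_available, permit_auto_converge):
        # Mirrors the logged statement: post-copy, when permitted and available,
        # takes precedence and auto-converge is not used.
        if permit_post_copy and post_copy_available:
            return "post-copy"
        if permit_auto_converge:
            return "auto-converge"
        return "none"

    print(choose_migration_aid(True, True, True))  # -> "post-copy", as logged
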
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.610 247447 DEBUG nova.virt.libvirt.driver [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.634 247447 INFO nova.virt.node [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Determined node identity b5f91a62-c356-4895-a9c1-523d85f8751b from /var/lib/nova/compute_id
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.644 247447 WARNING nova.compute.manager [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Compute nodes ['b5f91a62-c356-4895-a9c1-523d85f8751b'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.677 247447 INFO nova.compute.manager [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.697 247447 WARNING nova.compute.manager [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.697 247447 DEBUG oslo_concurrency.lockutils [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.697 247447 DEBUG oslo_concurrency.lockutils [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.698 247447 DEBUG oslo_concurrency.lockutils [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.698 247447 DEBUG nova.compute.resource_tracker [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 12:51:23 compute-0 nova_compute[247443]: 2025-11-26 12:51:23.698 247447 DEBUG oslo_concurrency.processutils [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 12:51:24 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 12:51:24 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1755524487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 12:51:24 compute-0 nova_compute[247443]: 2025-11-26 12:51:24.041 247447 DEBUG oslo_concurrency.processutils [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
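[Editorial sketch] Before building its resource view, nova shells out to ceph; the exact command is in the DEBUG line above and is answered by the local ceph-mon. A minimal reproduction of that probe: the command is copied from the log, while the JSON keys used below ('stats', 'total_bytes', 'total_avail_bytes') are assumptions based on typical ceph df --format=json output, not on this cluster's actual reply:

    import json
    import subprocess

    CMD = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]

    raw = subprocess.check_output(CMD, text=True)
    stats = json.loads(raw).get("stats", {})
    total_gib = stats.get("total_bytes", 0) / 1024 ** 3
    avail_gib = stats.get("total_avail_bytes", 0) / 1024 ** 3
    # The ceph-mgr pgmap line above reports "60 GiB / 60 GiB avail" for this lab cluster.
    print(f"{avail_gib:.1f} GiB available of {total_gib:.1f} GiB")
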
Nov 26 12:51:24 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 26 12:51:24 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 26 12:51:24 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:24 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1755524487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 12:51:24 compute-0 nova_compute[247443]: 2025-11-26 12:51:24.434 247447 WARNING nova.virt.libvirt.driver [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 12:51:24 compute-0 nova_compute[247443]: 2025-11-26 12:51:24.435 247447 DEBUG nova.compute.resource_tracker [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5210MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 12:51:24 compute-0 nova_compute[247443]: 2025-11-26 12:51:24.435 247447 DEBUG oslo_concurrency.lockutils [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 12:51:24 compute-0 nova_compute[247443]: 2025-11-26 12:51:24.436 247447 DEBUG oslo_concurrency.lockutils [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 12:51:24 compute-0 nova_compute[247443]: 2025-11-26 12:51:24.447 247447 WARNING nova.compute.resource_tracker [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] No compute node record for compute-0.ctlplane.example.com:b5f91a62-c356-4895-a9c1-523d85f8751b: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host b5f91a62-c356-4895-a9c1-523d85f8751b could not be found.
Nov 26 12:51:24 compute-0 nova_compute[247443]: 2025-11-26 12:51:24.460 247447 INFO nova.compute.resource_tracker [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: b5f91a62-c356-4895-a9c1-523d85f8751b
Nov 26 12:51:24 compute-0 nova_compute[247443]: 2025-11-26 12:51:24.504 247447 DEBUG nova.compute.resource_tracker [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 12:51:24 compute-0 nova_compute[247443]: 2025-11-26 12:51:24.504 247447 DEBUG nova.compute.resource_tracker [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 12:51:25 compute-0 ceph-mon[74966]: pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:25 compute-0 nova_compute[247443]: 2025-11-26 12:51:25.238 247447 INFO nova.scheduler.client.report [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] [req-53ab3889-e68c-4a11-9d00-87662e78ad43] Created resource provider record via placement API for resource provider with UUID b5f91a62-c356-4895-a9c1-523d85f8751b and name compute-0.ctlplane.example.com.
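[Editorial sketch] The report-client call behind "Created resource provider record via placement API" is a plain REST POST to placement's /resource_providers. A hedged sketch of that request: the provider name and UUID are taken from the log line above, while the endpoint URL, token and microversion value are placeholders and not values from this deployment:

    import requests

    PLACEMENT_URL = "http://placement.example.com"   # placeholder endpoint
    TOKEN = "REPLACE_WITH_KEYSTONE_TOKEN"            # placeholder token

    resp = requests.post(
        f"{PLACEMENT_URL}/resource_providers",
        headers={
            "X-Auth-Token": TOKEN,
            # Supplying an explicit uuid in the body needs a reasonably
            # recent placement microversion.
            "OpenStack-API-Version": "placement 1.20",
        },
        json={
            "name": "compute-0.ctlplane.example.com",
            "uuid": "b5f91a62-c356-4895-a9c1-523d85f8751b",
        },
    )
    resp.raise_for_status()
    print(resp.status_code)
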
Nov 26 12:51:25 compute-0 nova_compute[247443]: 2025-11-26 12:51:25.557 247447 DEBUG oslo_concurrency.processutils [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 12:51:25 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 12:51:25 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3933894025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 12:51:25 compute-0 nova_compute[247443]: 2025-11-26 12:51:25.882 247447 DEBUG oslo_concurrency.processutils [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 12:51:25 compute-0 nova_compute[247443]: 2025-11-26 12:51:25.886 247447 DEBUG nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 26 12:51:25 compute-0 nova_compute[247443]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Nov 26 12:51:25 compute-0 nova_compute[247443]: 2025-11-26 12:51:25.887 247447 INFO nova.virt.libvirt.host [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] kernel doesn't support AMD SEV
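[Editorial sketch] The SEV probe above is just a read of one sysfs file; the DEBUG line shows it contains "N" on this host, hence the INFO that follows. A tiny standalone equivalent (the function name is ours; nova's own helper is _kernel_supports_amd_sev at the host.py path shown in the log):

    SEV_PARAM = "/sys/module/kvm_amd/parameters/sev"

    def kernel_supports_amd_sev(path=SEV_PARAM):
        try:
            with open(path) as f:
                value = f.read().strip()
        except FileNotFoundError:
            return False  # kvm_amd module not present at all
        # Kernels expose the parameter as "Y"/"N" or "1"/"0" depending on version.
        return value in ("Y", "y", "1")

    print(kernel_supports_amd_sev())  # False on this compute node (file holds "N")
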
Nov 26 12:51:25 compute-0 nova_compute[247443]: 2025-11-26 12:51:25.888 247447 DEBUG nova.compute.provider_tree [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Updating inventory in ProviderTree for provider b5f91a62-c356-4895-a9c1-523d85f8751b with inventory: {'MEMORY_MB': {'total': 7681, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
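[Editorial sketch] Placement turns each inventory record above into schedulable capacity as (total - reserved) * allocation_ratio, which is why this 4-vCPU, 7681 MiB host ends up offering 16 VCPU and 7169 MB of RAM to the scheduler. Worked out with the exact values from the update_inventory line:

    inventory = {
        "MEMORY_MB": {"total": 7681, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 4,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc:9s} -> {capacity:g} schedulable units")
    # MEMORY_MB -> 7169, VCPU -> 16, DISK_GB -> 53.1
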
Nov 26 12:51:25 compute-0 nova_compute[247443]: 2025-11-26 12:51:25.888 247447 DEBUG nova.virt.libvirt.driver [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 26 12:51:25 compute-0 nova_compute[247443]: 2025-11-26 12:51:25.924 247447 DEBUG nova.scheduler.client.report [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Updated inventory for provider b5f91a62-c356-4895-a9c1-523d85f8751b with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7681, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 26 12:51:25 compute-0 nova_compute[247443]: 2025-11-26 12:51:25.924 247447 DEBUG nova.compute.provider_tree [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Updating resource provider b5f91a62-c356-4895-a9c1-523d85f8751b generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 26 12:51:25 compute-0 nova_compute[247443]: 2025-11-26 12:51:25.925 247447 DEBUG nova.compute.provider_tree [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Updating inventory in ProviderTree for provider b5f91a62-c356-4895-a9c1-523d85f8751b with inventory: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 26 12:51:25 compute-0 nova_compute[247443]: 2025-11-26 12:51:25.986 247447 DEBUG nova.compute.provider_tree [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Updating resource provider b5f91a62-c356-4895-a9c1-523d85f8751b generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 26 12:51:26 compute-0 nova_compute[247443]: 2025-11-26 12:51:26.001 247447 DEBUG nova.compute.resource_tracker [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 12:51:26 compute-0 nova_compute[247443]: 2025-11-26 12:51:26.002 247447 DEBUG oslo_concurrency.lockutils [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 12:51:26 compute-0 nova_compute[247443]: 2025-11-26 12:51:26.002 247447 DEBUG nova.service [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Nov 26 12:51:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:51:26 compute-0 nova_compute[247443]: 2025-11-26 12:51:26.042 247447 DEBUG nova.service [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Nov 26 12:51:26 compute-0 nova_compute[247443]: 2025-11-26 12:51:26.042 247447 DEBUG nova.servicegroup.drivers.db [None req-216150c8-cfbd-4b09-a4ce-3953308ac276 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Nov 26 12:51:26 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:26 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/3933894025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 12:51:27 compute-0 ceph-mon[74966]: pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:28 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:29 compute-0 ceph-mon[74966]: pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:30 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:31 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:51:31 compute-0 ceph-mon[74966]: pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.236046) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161491236082, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2044, "num_deletes": 251, "total_data_size": 3484318, "memory_usage": 3542968, "flush_reason": "Manual Compaction"}
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161491246314, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3398356, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9678, "largest_seqno": 11721, "table_properties": {"data_size": 3389143, "index_size": 5835, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17916, "raw_average_key_size": 19, "raw_value_size": 3370788, "raw_average_value_size": 3663, "num_data_blocks": 265, "num_entries": 920, "num_filter_entries": 920, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764161263, "oldest_key_time": 1764161263, "file_creation_time": 1764161491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "360f285c-8dc8-4f98-b8a2-efdebada3f64", "db_session_id": "S468WH7D6IL73VDKE1V5", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 10304 microseconds, and 8603 cpu microseconds.
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.246351) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3398356 bytes OK
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.246370) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.246792) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.246804) EVENT_LOG_v1 {"time_micros": 1764161491246800, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.246819) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3475783, prev total WAL file size 3475783, number of live WAL files 2.
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.253087) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3318KB)], [26(6078KB)]
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161491253127, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9623075, "oldest_snapshot_seqno": -1}
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3687 keys, 8023330 bytes, temperature: kUnknown
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161491271152, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8023330, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7994775, "index_size": 18205, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 88547, "raw_average_key_size": 24, "raw_value_size": 7924360, "raw_average_value_size": 2149, "num_data_blocks": 791, "num_entries": 3687, "num_filter_entries": 3687, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160613, "oldest_key_time": 0, "file_creation_time": 1764161491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "360f285c-8dc8-4f98-b8a2-efdebada3f64", "db_session_id": "S468WH7D6IL73VDKE1V5", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.271385) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8023330 bytes
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.271844) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 531.8 rd, 443.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 5.9 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(5.2) write-amplify(2.4) OK, records in: 4201, records dropped: 514 output_compression: NoCompression
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.271860) EVENT_LOG_v1 {"time_micros": 1764161491271852, "job": 10, "event": "compaction_finished", "compaction_time_micros": 18094, "compaction_time_cpu_micros": 14920, "output_level": 6, "num_output_files": 1, "total_output_size": 8023330, "num_input_records": 4201, "num_output_records": 3687, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161491272326, "job": 10, "event": "table_file_deletion", "file_number": 28}
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161491273014, "job": 10, "event": "table_file_deletion", "file_number": 26}
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.253013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.273052) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.273055) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.273056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.273057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:51:31 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:51:31.273059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:51:32 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:33 compute-0 ceph-mon[74966]: pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:34 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:35 compute-0 ceph-mon[74966]: pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:51:35
Nov 26 12:51:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 12:51:35 compute-0 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 12:51:35 compute-0 ceph-mgr[75236]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'images', '.mgr']
Nov 26 12:51:35 compute-0 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 12:51:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:51:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:51:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:51:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:51:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:51:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:51:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 12:51:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:51:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 12:51:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:51:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:51:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:51:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:51:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:51:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:51:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:51:36 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:51:36 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:37 compute-0 ceph-mon[74966]: pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:38 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:39 compute-0 ceph-mon[74966]: pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:39 compute-0 podman[247794]: 2025-11-26 12:51:39.875336094 +0000 UTC m=+0.041402353 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 26 12:51:39 compute-0 podman[247795]: 2025-11-26 12:51:39.879551951 +0000 UTC m=+0.045673184 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 26 12:51:40 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:51:41 compute-0 ceph-mon[74966]: pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 12:51:41 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/716650850' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 12:51:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 12:51:41 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/716650850' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 12:51:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 12:51:41 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2032296408' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 12:51:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 12:51:41 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2032296408' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 12:51:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 12:51:41 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3568356903' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 12:51:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 12:51:41 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3568356903' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 12:51:42 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:42 compute-0 ceph-mon[74966]: from='client.? 192.168.122.10:0/716650850' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 12:51:42 compute-0 ceph-mon[74966]: from='client.? 192.168.122.10:0/716650850' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 12:51:42 compute-0 ceph-mon[74966]: from='client.? 192.168.122.10:0/2032296408' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 12:51:42 compute-0 ceph-mon[74966]: from='client.? 192.168.122.10:0/2032296408' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 12:51:42 compute-0 ceph-mon[74966]: from='client.? 192.168.122.10:0/3568356903' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 12:51:42 compute-0 ceph-mon[74966]: from='client.? 192.168.122.10:0/3568356903' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 12:51:43 compute-0 ceph-mon[74966]: pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:44 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 12:51:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:51:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 12:51:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:51:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:51:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:51:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:51:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:51:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:51:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:51:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:51:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:51:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 12:51:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:51:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:51:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:51:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 12:51:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:51:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 12:51:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:51:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:51:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:51:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 12:51:45 compute-0 ceph-mon[74966]: pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:45 compute-0 podman[247826]: 2025-11-26 12:51:45.885250679 +0000 UTC m=+0.053276596 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 26 12:51:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:51:46 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:47 compute-0 ceph-mon[74966]: pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:48 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:49 compute-0 nova_compute[247443]: 2025-11-26 12:51:49.045 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:51:49 compute-0 nova_compute[247443]: 2025-11-26 12:51:49.061 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:51:49 compute-0 ceph-mon[74966]: pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:50 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:51 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:51:51 compute-0 ceph-mon[74966]: pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:52 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:53 compute-0 ceph-mon[74966]: pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:54 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:55 compute-0 ceph-mon[74966]: pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:51:56 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:57 compute-0 ceph-mon[74966]: pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:58 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:51:59 compute-0 ceph-mon[74966]: pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:00 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:01 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:52:01 compute-0 ceph-mon[74966]: pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:52:01.727 159053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 12:52:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:52:01.727 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 12:52:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:52:01.727 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 12:52:02 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:03 compute-0 ceph-mon[74966]: pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:04 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:05 compute-0 ceph-mon[74966]: pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 26 12:52:05 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3603071237' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 26 12:52:05 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14335 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 26 12:52:05 compute-0 ceph-mgr[75236]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 26 12:52:05 compute-0 ceph-mgr[75236]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 26 12:52:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:52:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:52:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:52:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:52:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:52:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:52:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:52:06 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:06 compute-0 ceph-mon[74966]: from='client.? 192.168.122.10:0/3603071237' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 26 12:52:06 compute-0 ceph-mon[74966]: from='client.14335 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 26 12:52:07 compute-0 ceph-mon[74966]: pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:08 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:09 compute-0 ceph-mon[74966]: pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:10 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:10 compute-0 podman[247849]: 2025-11-26 12:52:10.885318484 +0000 UTC m=+0.043832652 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:52:10 compute-0 podman[247850]: 2025-11-26 12:52:10.905794035 +0000 UTC m=+0.064462094 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible)
Nov 26 12:52:11 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:52:11 compute-0 ceph-mon[74966]: pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:12 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:13 compute-0 ceph-mon[74966]: pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:14 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:15 compute-0 sudo[247884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:52:15 compute-0 sudo[247884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:52:15 compute-0 sudo[247884]: pam_unix(sudo:session): session closed for user root
Nov 26 12:52:15 compute-0 sudo[247909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:52:15 compute-0 sudo[247909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:52:15 compute-0 sudo[247909]: pam_unix(sudo:session): session closed for user root
Nov 26 12:52:15 compute-0 sudo[247934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:52:15 compute-0 sudo[247934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:52:15 compute-0 sudo[247934]: pam_unix(sudo:session): session closed for user root
Nov 26 12:52:15 compute-0 sudo[247959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 12:52:15 compute-0 sudo[247959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:52:15 compute-0 ceph-mon[74966]: pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:15 compute-0 sudo[247959]: pam_unix(sudo:session): session closed for user root
Nov 26 12:52:15 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:52:15 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:52:15 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 12:52:15 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:52:15 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 12:52:15 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:52:15 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 3cdb41f1-c070-4983-9f29-bbe21b71db68 does not exist
Nov 26 12:52:15 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 1dff752f-0580-40c3-892a-2de40f805ecd does not exist
Nov 26 12:52:15 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 1270e570-7e15-493b-963c-fa97794a2a6a does not exist
Nov 26 12:52:15 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 12:52:15 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:52:15 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 12:52:15 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:52:15 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:52:15 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:52:15 compute-0 sudo[248013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:52:15 compute-0 sudo[248013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:52:15 compute-0 sudo[248013]: pam_unix(sudo:session): session closed for user root
Nov 26 12:52:15 compute-0 sudo[248038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:52:15 compute-0 sudo[248038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:52:15 compute-0 sudo[248038]: pam_unix(sudo:session): session closed for user root
Nov 26 12:52:15 compute-0 sudo[248063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:52:15 compute-0 sudo[248063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:52:15 compute-0 sudo[248063]: pam_unix(sudo:session): session closed for user root
Nov 26 12:52:15 compute-0 sudo[248088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 12:52:15 compute-0 sudo[248088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:52:16 compute-0 podman[248144]: 2025-11-26 12:52:16.020708559 +0000 UTC m=+0.027436318 container create 5fec31a6ed88ffc68bbe50e2117a0d0fc5b35349b2dba4aa544ea11f43bba953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_visvesvaraya, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:52:16 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:52:16 compute-0 systemd[1]: Started libpod-conmon-5fec31a6ed88ffc68bbe50e2117a0d0fc5b35349b2dba4aa544ea11f43bba953.scope.
Nov 26 12:52:16 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:52:16 compute-0 podman[248144]: 2025-11-26 12:52:16.07147253 +0000 UTC m=+0.078200310 container init 5fec31a6ed88ffc68bbe50e2117a0d0fc5b35349b2dba4aa544ea11f43bba953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_visvesvaraya, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:52:16 compute-0 podman[248144]: 2025-11-26 12:52:16.076807407 +0000 UTC m=+0.083535167 container start 5fec31a6ed88ffc68bbe50e2117a0d0fc5b35349b2dba4aa544ea11f43bba953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 26 12:52:16 compute-0 podman[248144]: 2025-11-26 12:52:16.078132518 +0000 UTC m=+0.084860278 container attach 5fec31a6ed88ffc68bbe50e2117a0d0fc5b35349b2dba4aa544ea11f43bba953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_visvesvaraya, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 12:52:16 compute-0 systemd[1]: libpod-5fec31a6ed88ffc68bbe50e2117a0d0fc5b35349b2dba4aa544ea11f43bba953.scope: Deactivated successfully.
Nov 26 12:52:16 compute-0 elated_visvesvaraya[248159]: 167 167
Nov 26 12:52:16 compute-0 conmon[248159]: conmon 5fec31a6ed88ffc68bbe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5fec31a6ed88ffc68bbe50e2117a0d0fc5b35349b2dba4aa544ea11f43bba953.scope/container/memory.events
Nov 26 12:52:16 compute-0 podman[248144]: 2025-11-26 12:52:16.081574745 +0000 UTC m=+0.088302504 container died 5fec31a6ed88ffc68bbe50e2117a0d0fc5b35349b2dba4aa544ea11f43bba953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_visvesvaraya, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:52:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-869cb5fe09c88caeef62029997f7551edb1fcb2ebd2f8b46011c5416d1d705be-merged.mount: Deactivated successfully.
Nov 26 12:52:16 compute-0 podman[248144]: 2025-11-26 12:52:16.103201979 +0000 UTC m=+0.109929739 container remove 5fec31a6ed88ffc68bbe50e2117a0d0fc5b35349b2dba4aa544ea11f43bba953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_visvesvaraya, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:52:16 compute-0 podman[248144]: 2025-11-26 12:52:16.008890055 +0000 UTC m=+0.015617825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:52:16 compute-0 systemd[1]: libpod-conmon-5fec31a6ed88ffc68bbe50e2117a0d0fc5b35349b2dba4aa544ea11f43bba953.scope: Deactivated successfully.
Nov 26 12:52:16 compute-0 podman[248155]: 2025-11-26 12:52:16.11940094 +0000 UTC m=+0.074730601 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
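The ovn_controller health_status event above carries its config_data as a Python-style dict literal (single quotes, True) rather than strict JSON. Purely as an illustrative sketch, and not something present in the log itself, such a value can be turned into a dict with ast.literal_eval; the config_data string below is a shortened, hypothetical stand-in for the full value recorded above:

    import ast

    # Hypothetical, abbreviated copy of the config_data label from the event above.
    # The real value in the log is much longer; only two keys are kept here.
    config_data = ("{'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', "
                   "'test': '/openstack/healthcheck'}, 'privileged': True}")

    cfg = ast.literal_eval(config_data)          # safe parse of the dict literal
    print("healthcheck test:", cfg["healthcheck"]["test"])
    print("privileged:", cfg["privileged"])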
Nov 26 12:52:16 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:16 compute-0 podman[248203]: 2025-11-26 12:52:16.227555386 +0000 UTC m=+0.028470801 container create 20f0180f96303c3a5bd5e24c4d3f3da5527083e91676b6fe91f3123c550590a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 12:52:16 compute-0 systemd[1]: Started libpod-conmon-20f0180f96303c3a5bd5e24c4d3f3da5527083e91676b6fe91f3123c550590a9.scope.
Nov 26 12:52:16 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:52:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d509d298b120b10167783dbcfe145eb23ee8fc39799c66cd8870ba8540311b67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:52:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d509d298b120b10167783dbcfe145eb23ee8fc39799c66cd8870ba8540311b67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:52:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d509d298b120b10167783dbcfe145eb23ee8fc39799c66cd8870ba8540311b67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:52:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d509d298b120b10167783dbcfe145eb23ee8fc39799c66cd8870ba8540311b67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:52:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d509d298b120b10167783dbcfe145eb23ee8fc39799c66cd8870ba8540311b67/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:52:16 compute-0 podman[248203]: 2025-11-26 12:52:16.290124968 +0000 UTC m=+0.091040382 container init 20f0180f96303c3a5bd5e24c4d3f3da5527083e91676b6fe91f3123c550590a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 12:52:16 compute-0 podman[248203]: 2025-11-26 12:52:16.294780613 +0000 UTC m=+0.095696017 container start 20f0180f96303c3a5bd5e24c4d3f3da5527083e91676b6fe91f3123c550590a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:52:16 compute-0 podman[248203]: 2025-11-26 12:52:16.296105093 +0000 UTC m=+0.097020507 container attach 20f0180f96303c3a5bd5e24c4d3f3da5527083e91676b6fe91f3123c550590a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_neumann, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:52:16 compute-0 podman[248203]: 2025-11-26 12:52:16.216557331 +0000 UTC m=+0.017472756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:52:16 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:52:16 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:52:16 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:52:16 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:52:16 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:52:16 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:52:17 compute-0 optimistic_neumann[248216]: --> passed data devices: 0 physical, 3 LVM
Nov 26 12:52:17 compute-0 optimistic_neumann[248216]: --> relative data size: 1.0
Nov 26 12:52:17 compute-0 optimistic_neumann[248216]: --> All data devices are unavailable
Nov 26 12:52:17 compute-0 systemd[1]: libpod-20f0180f96303c3a5bd5e24c4d3f3da5527083e91676b6fe91f3123c550590a9.scope: Deactivated successfully.
Nov 26 12:52:17 compute-0 podman[248203]: 2025-11-26 12:52:17.134061142 +0000 UTC m=+0.934976566 container died 20f0180f96303c3a5bd5e24c4d3f3da5527083e91676b6fe91f3123c550590a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_neumann, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 12:52:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-d509d298b120b10167783dbcfe145eb23ee8fc39799c66cd8870ba8540311b67-merged.mount: Deactivated successfully.
Nov 26 12:52:17 compute-0 podman[248203]: 2025-11-26 12:52:17.170216895 +0000 UTC m=+0.971132308 container remove 20f0180f96303c3a5bd5e24c4d3f3da5527083e91676b6fe91f3123c550590a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:52:17 compute-0 systemd[1]: libpod-conmon-20f0180f96303c3a5bd5e24c4d3f3da5527083e91676b6fe91f3123c550590a9.scope: Deactivated successfully.
Nov 26 12:52:17 compute-0 sudo[248088]: pam_unix(sudo:session): session closed for user root
Nov 26 12:52:17 compute-0 sudo[248255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:52:17 compute-0 sudo[248255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:52:17 compute-0 sudo[248255]: pam_unix(sudo:session): session closed for user root
Nov 26 12:52:17 compute-0 sudo[248280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:52:17 compute-0 sudo[248280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:52:17 compute-0 sudo[248280]: pam_unix(sudo:session): session closed for user root
Nov 26 12:52:17 compute-0 sudo[248305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:52:17 compute-0 sudo[248305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:52:17 compute-0 sudo[248305]: pam_unix(sudo:session): session closed for user root
Nov 26 12:52:17 compute-0 ceph-mon[74966]: pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:17 compute-0 sudo[248330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- lvm list --format json
Nov 26 12:52:17 compute-0 sudo[248330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:52:17 compute-0 podman[248387]: 2025-11-26 12:52:17.605897935 +0000 UTC m=+0.028852371 container create 1811cfeae0cc8bcdae11be5fa607104fd06aa72538596628bde5f92ec057c6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williamson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:52:17 compute-0 systemd[1]: Started libpod-conmon-1811cfeae0cc8bcdae11be5fa607104fd06aa72538596628bde5f92ec057c6af.scope.
Nov 26 12:52:17 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:52:17 compute-0 podman[248387]: 2025-11-26 12:52:17.645954449 +0000 UTC m=+0.068908885 container init 1811cfeae0cc8bcdae11be5fa607104fd06aa72538596628bde5f92ec057c6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williamson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:52:17 compute-0 podman[248387]: 2025-11-26 12:52:17.651555339 +0000 UTC m=+0.074509775 container start 1811cfeae0cc8bcdae11be5fa607104fd06aa72538596628bde5f92ec057c6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williamson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 12:52:17 compute-0 podman[248387]: 2025-11-26 12:52:17.652858909 +0000 UTC m=+0.075813365 container attach 1811cfeae0cc8bcdae11be5fa607104fd06aa72538596628bde5f92ec057c6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williamson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 12:52:17 compute-0 epic_williamson[248400]: 167 167
Nov 26 12:52:17 compute-0 systemd[1]: libpod-1811cfeae0cc8bcdae11be5fa607104fd06aa72538596628bde5f92ec057c6af.scope: Deactivated successfully.
Nov 26 12:52:17 compute-0 conmon[248400]: conmon 1811cfeae0cc8bcdae11 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1811cfeae0cc8bcdae11be5fa607104fd06aa72538596628bde5f92ec057c6af.scope/container/memory.events
Nov 26 12:52:17 compute-0 podman[248387]: 2025-11-26 12:52:17.655576728 +0000 UTC m=+0.078531165 container died 1811cfeae0cc8bcdae11be5fa607104fd06aa72538596628bde5f92ec057c6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williamson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:52:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-387392830b0ec1a4fdf774168af941e07230d1f845a0b0231e4735a232a8ad86-merged.mount: Deactivated successfully.
Nov 26 12:52:17 compute-0 podman[248387]: 2025-11-26 12:52:17.680027001 +0000 UTC m=+0.102981437 container remove 1811cfeae0cc8bcdae11be5fa607104fd06aa72538596628bde5f92ec057c6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williamson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:52:17 compute-0 podman[248387]: 2025-11-26 12:52:17.594683782 +0000 UTC m=+0.017638238 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:52:17 compute-0 systemd[1]: libpod-conmon-1811cfeae0cc8bcdae11be5fa607104fd06aa72538596628bde5f92ec057c6af.scope: Deactivated successfully.
Nov 26 12:52:17 compute-0 podman[248422]: 2025-11-26 12:52:17.801408951 +0000 UTC m=+0.028635592 container create 4402ea175c91021f35d371f2ad32f2949d313ea1a13788261c8c7adbaf156a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_euler, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:52:17 compute-0 systemd[1]: Started libpod-conmon-4402ea175c91021f35d371f2ad32f2949d313ea1a13788261c8c7adbaf156a32.scope.
Nov 26 12:52:17 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:52:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa1a418588b73a98b873450d2dda6a6d11c8b76241496c33cdaef49c3a665a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:52:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa1a418588b73a98b873450d2dda6a6d11c8b76241496c33cdaef49c3a665a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:52:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa1a418588b73a98b873450d2dda6a6d11c8b76241496c33cdaef49c3a665a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:52:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa1a418588b73a98b873450d2dda6a6d11c8b76241496c33cdaef49c3a665a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:52:17 compute-0 podman[248422]: 2025-11-26 12:52:17.85407431 +0000 UTC m=+0.081300942 container init 4402ea175c91021f35d371f2ad32f2949d313ea1a13788261c8c7adbaf156a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_euler, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 12:52:17 compute-0 podman[248422]: 2025-11-26 12:52:17.858478882 +0000 UTC m=+0.085705513 container start 4402ea175c91021f35d371f2ad32f2949d313ea1a13788261c8c7adbaf156a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 12:52:17 compute-0 podman[248422]: 2025-11-26 12:52:17.859731306 +0000 UTC m=+0.086957937 container attach 4402ea175c91021f35d371f2ad32f2949d313ea1a13788261c8c7adbaf156a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 12:52:17 compute-0 podman[248422]: 2025-11-26 12:52:17.790469496 +0000 UTC m=+0.017696148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:52:18 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:18 compute-0 keen_euler[248436]: {
Nov 26 12:52:18 compute-0 keen_euler[248436]:     "0": [
Nov 26 12:52:18 compute-0 keen_euler[248436]:         {
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "devices": [
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "/dev/loop3"
Nov 26 12:52:18 compute-0 keen_euler[248436]:             ],
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "lv_name": "ceph_lv0",
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "lv_size": "21470642176",
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "name": "ceph_lv0",
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "tags": {
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.cluster_name": "ceph",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.crush_device_class": "",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.encrypted": "0",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.osd_id": "0",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.type": "block",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.vdo": "0"
Nov 26 12:52:18 compute-0 keen_euler[248436]:             },
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "type": "block",
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "vg_name": "ceph_vg0"
Nov 26 12:52:18 compute-0 keen_euler[248436]:         }
Nov 26 12:52:18 compute-0 keen_euler[248436]:     ],
Nov 26 12:52:18 compute-0 keen_euler[248436]:     "1": [
Nov 26 12:52:18 compute-0 keen_euler[248436]:         {
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "devices": [
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "/dev/loop4"
Nov 26 12:52:18 compute-0 keen_euler[248436]:             ],
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "lv_name": "ceph_lv1",
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "lv_size": "21470642176",
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "name": "ceph_lv1",
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "tags": {
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.cluster_name": "ceph",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.crush_device_class": "",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.encrypted": "0",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.osd_id": "1",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.type": "block",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.vdo": "0"
Nov 26 12:52:18 compute-0 keen_euler[248436]:             },
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "type": "block",
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "vg_name": "ceph_vg1"
Nov 26 12:52:18 compute-0 keen_euler[248436]:         }
Nov 26 12:52:18 compute-0 keen_euler[248436]:     ],
Nov 26 12:52:18 compute-0 keen_euler[248436]:     "2": [
Nov 26 12:52:18 compute-0 keen_euler[248436]:         {
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "devices": [
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "/dev/loop5"
Nov 26 12:52:18 compute-0 keen_euler[248436]:             ],
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "lv_name": "ceph_lv2",
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "lv_size": "21470642176",
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "name": "ceph_lv2",
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "tags": {
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.cluster_name": "ceph",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.crush_device_class": "",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.encrypted": "0",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.osd_id": "2",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.type": "block",
Nov 26 12:52:18 compute-0 keen_euler[248436]:                 "ceph.vdo": "0"
Nov 26 12:52:18 compute-0 keen_euler[248436]:             },
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "type": "block",
Nov 26 12:52:18 compute-0 keen_euler[248436]:             "vg_name": "ceph_vg2"
Nov 26 12:52:18 compute-0 keen_euler[248436]:         }
Nov 26 12:52:18 compute-0 keen_euler[248436]:     ]
Nov 26 12:52:18 compute-0 keen_euler[248436]: }
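The JSON printed by the keen_euler container above is the output of the `ceph-volume ... lvm list --format json` call dispatched at 12:52:17: a map from OSD id to the logical volumes backing it. As a minimal sketch only, assuming that output had been captured to a file named lvm_list.json (a hypothetical file, not something the log shows being written), it could be summarised like this:

    import json

    # Illustrative parse of `ceph-volume lvm list --format json` output,
    # assuming it was saved to lvm_list.json.
    with open("lvm_list.json") as f:
        lvm = json.load(f)

    # Top-level keys are OSD ids ("0", "1", "2"); each value is a list of LVs.
    for osd_id, entries in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in entries:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: lv_path={lv['lv_path']} "
                  f"osd_fsid={tags.get('ceph.osd_fsid', '?')} "
                  f"devices={','.join(lv.get('devices', []))}")

Run against the output above, this would report osd.0 on /dev/ceph_vg0/ceph_lv0 (/dev/loop3), osd.1 on /dev/ceph_vg1/ceph_lv1 (/dev/loop4) and osd.2 on /dev/ceph_vg2/ceph_lv2 (/dev/loop5).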
Nov 26 12:52:18 compute-0 systemd[1]: libpod-4402ea175c91021f35d371f2ad32f2949d313ea1a13788261c8c7adbaf156a32.scope: Deactivated successfully.
Nov 26 12:52:18 compute-0 conmon[248436]: conmon 4402ea175c91021f35d3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4402ea175c91021f35d371f2ad32f2949d313ea1a13788261c8c7adbaf156a32.scope/container/memory.events
Nov 26 12:52:18 compute-0 podman[248422]: 2025-11-26 12:52:18.495449954 +0000 UTC m=+0.722676595 container died 4402ea175c91021f35d371f2ad32f2949d313ea1a13788261c8c7adbaf156a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_euler, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:52:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfa1a418588b73a98b873450d2dda6a6d11c8b76241496c33cdaef49c3a665a7-merged.mount: Deactivated successfully.
Nov 26 12:52:18 compute-0 podman[248422]: 2025-11-26 12:52:18.524048415 +0000 UTC m=+0.751275047 container remove 4402ea175c91021f35d371f2ad32f2949d313ea1a13788261c8c7adbaf156a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_euler, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:52:18 compute-0 systemd[1]: libpod-conmon-4402ea175c91021f35d371f2ad32f2949d313ea1a13788261c8c7adbaf156a32.scope: Deactivated successfully.
Nov 26 12:52:18 compute-0 sudo[248330]: pam_unix(sudo:session): session closed for user root
Nov 26 12:52:18 compute-0 sudo[248455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:52:18 compute-0 sudo[248455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:52:18 compute-0 sudo[248455]: pam_unix(sudo:session): session closed for user root
Nov 26 12:52:18 compute-0 sudo[248480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:52:18 compute-0 sudo[248480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:52:18 compute-0 sudo[248480]: pam_unix(sudo:session): session closed for user root
Nov 26 12:52:18 compute-0 sudo[248505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:52:18 compute-0 sudo[248505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:52:18 compute-0 sudo[248505]: pam_unix(sudo:session): session closed for user root
Nov 26 12:52:18 compute-0 sudo[248530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- raw list --format json
Nov 26 12:52:18 compute-0 sudo[248530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:52:18 compute-0 podman[248585]: 2025-11-26 12:52:18.921249778 +0000 UTC m=+0.025003998 container create 18128b9db9be79a1c783732137e68d0e0f31a9ad8c6d3a6191ff9d107d8425af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cori, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 12:52:18 compute-0 systemd[1]: Started libpod-conmon-18128b9db9be79a1c783732137e68d0e0f31a9ad8c6d3a6191ff9d107d8425af.scope.
Nov 26 12:52:18 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:52:18 compute-0 podman[248585]: 2025-11-26 12:52:18.956587437 +0000 UTC m=+0.060341657 container init 18128b9db9be79a1c783732137e68d0e0f31a9ad8c6d3a6191ff9d107d8425af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cori, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:52:18 compute-0 podman[248585]: 2025-11-26 12:52:18.960885207 +0000 UTC m=+0.064639417 container start 18128b9db9be79a1c783732137e68d0e0f31a9ad8c6d3a6191ff9d107d8425af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 12:52:18 compute-0 podman[248585]: 2025-11-26 12:52:18.962080282 +0000 UTC m=+0.065834492 container attach 18128b9db9be79a1c783732137e68d0e0f31a9ad8c6d3a6191ff9d107d8425af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 12:52:18 compute-0 frosty_cori[248598]: 167 167
Nov 26 12:52:18 compute-0 systemd[1]: libpod-18128b9db9be79a1c783732137e68d0e0f31a9ad8c6d3a6191ff9d107d8425af.scope: Deactivated successfully.
Nov 26 12:52:18 compute-0 conmon[248598]: conmon 18128b9db9be79a1c783 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-18128b9db9be79a1c783732137e68d0e0f31a9ad8c6d3a6191ff9d107d8425af.scope/container/memory.events
Nov 26 12:52:18 compute-0 podman[248585]: 2025-11-26 12:52:18.964595059 +0000 UTC m=+0.068349268 container died 18128b9db9be79a1c783732137e68d0e0f31a9ad8c6d3a6191ff9d107d8425af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cori, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 12:52:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-a12818cf15520940e65982f7de9d9825c0939831b50a2838c5d9c604adcad5c3-merged.mount: Deactivated successfully.
Nov 26 12:52:18 compute-0 podman[248585]: 2025-11-26 12:52:18.981959929 +0000 UTC m=+0.085714139 container remove 18128b9db9be79a1c783732137e68d0e0f31a9ad8c6d3a6191ff9d107d8425af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:52:18 compute-0 podman[248585]: 2025-11-26 12:52:18.91144791 +0000 UTC m=+0.015202141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:52:18 compute-0 systemd[1]: libpod-conmon-18128b9db9be79a1c783732137e68d0e0f31a9ad8c6d3a6191ff9d107d8425af.scope: Deactivated successfully.
Nov 26 12:52:19 compute-0 podman[248620]: 2025-11-26 12:52:19.100320187 +0000 UTC m=+0.026213631 container create 968609ab2edd6f7d27b1e3cf6ed4d30e58c38a254e10c2cb88f19c20086f4dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_germain, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 12:52:19 compute-0 systemd[1]: Started libpod-conmon-968609ab2edd6f7d27b1e3cf6ed4d30e58c38a254e10c2cb88f19c20086f4dc8.scope.
Nov 26 12:52:19 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:52:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f33e0bd0f2f1c7b92835e6a8fe3cd01469dd8f033fa525d541714c7c9b360ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:52:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f33e0bd0f2f1c7b92835e6a8fe3cd01469dd8f033fa525d541714c7c9b360ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:52:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f33e0bd0f2f1c7b92835e6a8fe3cd01469dd8f033fa525d541714c7c9b360ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:52:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f33e0bd0f2f1c7b92835e6a8fe3cd01469dd8f033fa525d541714c7c9b360ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:52:19 compute-0 podman[248620]: 2025-11-26 12:52:19.158070382 +0000 UTC m=+0.083963826 container init 968609ab2edd6f7d27b1e3cf6ed4d30e58c38a254e10c2cb88f19c20086f4dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Nov 26 12:52:19 compute-0 podman[248620]: 2025-11-26 12:52:19.163058546 +0000 UTC m=+0.088951990 container start 968609ab2edd6f7d27b1e3cf6ed4d30e58c38a254e10c2cb88f19c20086f4dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_germain, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 12:52:19 compute-0 podman[248620]: 2025-11-26 12:52:19.164454661 +0000 UTC m=+0.090348105 container attach 968609ab2edd6f7d27b1e3cf6ed4d30e58c38a254e10c2cb88f19c20086f4dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_germain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 12:52:19 compute-0 podman[248620]: 2025-11-26 12:52:19.090149814 +0000 UTC m=+0.016043268 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:52:19 compute-0 ceph-mon[74966]: pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:19 compute-0 practical_germain[248633]: {
Nov 26 12:52:19 compute-0 practical_germain[248633]:     "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 12:52:19 compute-0 practical_germain[248633]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:52:19 compute-0 practical_germain[248633]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 12:52:19 compute-0 practical_germain[248633]:         "osd_id": 1,
Nov 26 12:52:19 compute-0 practical_germain[248633]:         "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:52:19 compute-0 practical_germain[248633]:         "type": "bluestore"
Nov 26 12:52:19 compute-0 practical_germain[248633]:     },
Nov 26 12:52:19 compute-0 practical_germain[248633]:     "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 12:52:19 compute-0 practical_germain[248633]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:52:19 compute-0 practical_germain[248633]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 12:52:19 compute-0 practical_germain[248633]:         "osd_id": 2,
Nov 26 12:52:19 compute-0 practical_germain[248633]:         "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:52:19 compute-0 practical_germain[248633]:         "type": "bluestore"
Nov 26 12:52:19 compute-0 practical_germain[248633]:     },
Nov 26 12:52:19 compute-0 practical_germain[248633]:     "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 12:52:19 compute-0 practical_germain[248633]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:52:19 compute-0 practical_germain[248633]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 12:52:19 compute-0 practical_germain[248633]:         "osd_id": 0,
Nov 26 12:52:19 compute-0 practical_germain[248633]:         "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:52:19 compute-0 practical_germain[248633]:         "type": "bluestore"
Nov 26 12:52:19 compute-0 practical_germain[248633]:     }
Nov 26 12:52:19 compute-0 practical_germain[248633]: }
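The practical_germain output above comes from the companion `ceph-volume ... raw list --format json` call and is keyed by OSD uuid rather than OSD id. A similarly hedged sketch, assuming that JSON had been captured to a hypothetical raw_list.json, could cross-check each bluestore device against the cluster fsid seen throughout this log:

    import json

    EXPECTED_FSID = "f7d7fe93-41e5-51c4-b72d-63b38686102e"  # cluster fsid from the log

    # Illustrative parse of `ceph-volume raw list --format json` output,
    # assuming it was saved to raw_list.json.
    with open("raw_list.json") as f:
        raw = json.load(f)

    for osd_uuid, info in raw.items():
        fsid_ok = info.get("ceph_fsid") == EXPECTED_FSID
        print(f"osd.{info['osd_id']} uuid={osd_uuid} device={info['device']} "
              f"type={info['type']} fsid_match={fsid_ok}")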
Nov 26 12:52:19 compute-0 systemd[1]: libpod-968609ab2edd6f7d27b1e3cf6ed4d30e58c38a254e10c2cb88f19c20086f4dc8.scope: Deactivated successfully.
Nov 26 12:52:19 compute-0 podman[248666]: 2025-11-26 12:52:19.944153216 +0000 UTC m=+0.016109243 container died 968609ab2edd6f7d27b1e3cf6ed4d30e58c38a254e10c2cb88f19c20086f4dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_germain, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 26 12:52:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f33e0bd0f2f1c7b92835e6a8fe3cd01469dd8f033fa525d541714c7c9b360ec-merged.mount: Deactivated successfully.
Nov 26 12:52:19 compute-0 podman[248666]: 2025-11-26 12:52:19.972556088 +0000 UTC m=+0.044512114 container remove 968609ab2edd6f7d27b1e3cf6ed4d30e58c38a254e10c2cb88f19c20086f4dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_germain, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 12:52:19 compute-0 systemd[1]: libpod-conmon-968609ab2edd6f7d27b1e3cf6ed4d30e58c38a254e10c2cb88f19c20086f4dc8.scope: Deactivated successfully.
Nov 26 12:52:19 compute-0 sudo[248530]: pam_unix(sudo:session): session closed for user root
Nov 26 12:52:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:52:20 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:52:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:52:20 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:52:20 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 4bbb12ed-7710-424f-b06e-83afed18d215 does not exist
Nov 26 12:52:20 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 58402b9a-4304-4c60-b89c-d201e4201e7c does not exist
Nov 26 12:52:20 compute-0 sudo[248678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:52:20 compute-0 sudo[248678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:52:20 compute-0 sudo[248678]: pam_unix(sudo:session): session closed for user root
Nov 26 12:52:20 compute-0 sudo[248703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 12:52:20 compute-0 sudo[248703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:52:20 compute-0 sudo[248703]: pam_unix(sudo:session): session closed for user root
Nov 26 12:52:20 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 26 12:52:20 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1403041407' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 26 12:52:20 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14349 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 26 12:52:20 compute-0 ceph-mgr[75236]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 26 12:52:20 compute-0 ceph-mgr[75236]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 26 12:52:21 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:52:21 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:52:21 compute-0 ceph-mon[74966]: pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:21 compute-0 ceph-mon[74966]: from='client.? 192.168.122.10:0/1403041407' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 26 12:52:21 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:52:22 compute-0 ceph-mon[74966]: from='client.14349 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 26 12:52:22 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:22 compute-0 nova_compute[247443]: 2025-11-26 12:52:22.820 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:52:22 compute-0 nova_compute[247443]: 2025-11-26 12:52:22.821 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:52:22 compute-0 nova_compute[247443]: 2025-11-26 12:52:22.821 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 12:52:22 compute-0 nova_compute[247443]: 2025-11-26 12:52:22.821 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 12:52:22 compute-0 nova_compute[247443]: 2025-11-26 12:52:22.830 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 12:52:22 compute-0 nova_compute[247443]: 2025-11-26 12:52:22.830 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:52:22 compute-0 nova_compute[247443]: 2025-11-26 12:52:22.831 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:52:22 compute-0 nova_compute[247443]: 2025-11-26 12:52:22.831 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:52:22 compute-0 nova_compute[247443]: 2025-11-26 12:52:22.831 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:52:22 compute-0 nova_compute[247443]: 2025-11-26 12:52:22.831 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:52:22 compute-0 nova_compute[247443]: 2025-11-26 12:52:22.831 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:52:22 compute-0 nova_compute[247443]: 2025-11-26 12:52:22.832 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 12:52:22 compute-0 nova_compute[247443]: 2025-11-26 12:52:22.832 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:52:22 compute-0 nova_compute[247443]: 2025-11-26 12:52:22.845 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 12:52:22 compute-0 nova_compute[247443]: 2025-11-26 12:52:22.845 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 12:52:22 compute-0 nova_compute[247443]: 2025-11-26 12:52:22.845 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 12:52:22 compute-0 nova_compute[247443]: 2025-11-26 12:52:22.845 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 12:52:22 compute-0 nova_compute[247443]: 2025-11-26 12:52:22.846 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 12:52:23 compute-0 ceph-mon[74966]: pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:23 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 12:52:23 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/708208664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 12:52:23 compute-0 nova_compute[247443]: 2025-11-26 12:52:23.172 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.326s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 12:52:23 compute-0 nova_compute[247443]: 2025-11-26 12:52:23.356 247447 WARNING nova.virt.libvirt.driver [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 12:52:23 compute-0 nova_compute[247443]: 2025-11-26 12:52:23.357 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5222MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 12:52:23 compute-0 nova_compute[247443]: 2025-11-26 12:52:23.357 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 12:52:23 compute-0 nova_compute[247443]: 2025-11-26 12:52:23.357 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 12:52:23 compute-0 nova_compute[247443]: 2025-11-26 12:52:23.413 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 12:52:23 compute-0 nova_compute[247443]: 2025-11-26 12:52:23.414 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 12:52:23 compute-0 nova_compute[247443]: 2025-11-26 12:52:23.425 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 12:52:23 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 12:52:23 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1965460234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 12:52:23 compute-0 nova_compute[247443]: 2025-11-26 12:52:23.758 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 12:52:23 compute-0 nova_compute[247443]: 2025-11-26 12:52:23.762 247447 DEBUG nova.compute.provider_tree [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Inventory has not changed in ProviderTree for provider: b5f91a62-c356-4895-a9c1-523d85f8751b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 12:52:23 compute-0 nova_compute[247443]: 2025-11-26 12:52:23.774 247447 DEBUG nova.scheduler.client.report [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Inventory has not changed for provider b5f91a62-c356-4895-a9c1-523d85f8751b based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 12:52:23 compute-0 nova_compute[247443]: 2025-11-26 12:52:23.775 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 12:52:23 compute-0 nova_compute[247443]: 2025-11-26 12:52:23.775 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.418s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 12:52:24 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/708208664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 12:52:24 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1965460234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 12:52:24 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:25 compute-0 ceph-mon[74966]: pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:52:26 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:27 compute-0 ceph-mon[74966]: pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:28 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:29 compute-0 ceph-mon[74966]: pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:30 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:31 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:52:31 compute-0 ceph-mon[74966]: pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:32 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:33 compute-0 ceph-mon[74966]: pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:34 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:35 compute-0 ceph-mon[74966]: pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:52:35
Nov 26 12:52:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 12:52:35 compute-0 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 12:52:35 compute-0 ceph-mgr[75236]: [balancer INFO root] pools ['.rgw.root', 'vms', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'default.rgw.log', 'volumes', '.mgr', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta']
Nov 26 12:52:35 compute-0 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 12:52:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:52:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:52:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:52:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:52:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:52:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:52:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 12:52:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:52:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 12:52:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:52:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:52:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:52:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:52:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:52:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:52:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:52:36 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:52:36 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:37 compute-0 ceph-mon[74966]: pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:38 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:39 compute-0 ceph-mon[74966]: pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:40 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:52:41 compute-0 ceph-mon[74966]: pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:41 compute-0 podman[248772]: 2025-11-26 12:52:41.878304868 +0000 UTC m=+0.044574390 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Nov 26 12:52:41 compute-0 podman[248773]: 2025-11-26 12:52:41.878603643 +0000 UTC m=+0.045372006 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 26 12:52:42 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:43 compute-0 ceph-mon[74966]: pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:44 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 12:52:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:52:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 12:52:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:52:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:52:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:52:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:52:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:52:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:52:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:52:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:52:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:52:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 12:52:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:52:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:52:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:52:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 12:52:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:52:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 12:52:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:52:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:52:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:52:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 12:52:45 compute-0 ceph-mon[74966]: pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:52:46 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:46 compute-0 podman[248805]: 2025-11-26 12:52:46.89159708 +0000 UTC m=+0.059423493 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 26 12:52:47 compute-0 ceph-mon[74966]: pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:48 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:49 compute-0 ceph-mon[74966]: pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:50 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:51 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:52:51 compute-0 ceph-mon[74966]: pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:52 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:53 compute-0 ceph-mon[74966]: pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:54 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:55 compute-0 ceph-mon[74966]: pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:52:56 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:57 compute-0 ceph-mon[74966]: pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:58 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:52:59 compute-0 ceph-mon[74966]: pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:00 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:01 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:53:01 compute-0 ceph-mon[74966]: pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:53:01.727 159053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 12:53:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:53:01.727 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 12:53:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:53:01.728 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 12:53:02 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:03 compute-0 ceph-mon[74966]: pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:04 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:05 compute-0 ceph-mon[74966]: pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:53:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:53:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:53:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:53:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:53:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:53:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:53:06 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:07 compute-0 ceph-mon[74966]: pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:08 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:09 compute-0 ceph-mon[74966]: pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:10 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:11 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:53:11 compute-0 ceph-mon[74966]: pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:12 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:12 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:53:12.661 159053 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:77:ce', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3b:aa:b7:c5:2f'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 12:53:12 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:53:12.662 159053 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 26 12:53:12 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:53:12.662 159053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1a132c77-5dda-4b90-923d-26a448f3fef6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 12:53:12 compute-0 podman[248829]: 2025-11-26 12:53:12.883358595 +0000 UTC m=+0.043213645 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 12:53:12 compute-0 podman[248830]: 2025-11-26 12:53:12.894380065 +0000 UTC m=+0.054439348 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd)
Nov 26 12:53:13 compute-0 ceph-mon[74966]: pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:14 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:15 compute-0 ceph-mon[74966]: pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:16 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:53:16 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:17 compute-0 ceph-mon[74966]: pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:17 compute-0 podman[248862]: 2025-11-26 12:53:17.915158124 +0000 UTC m=+0.080348148 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 26 12:53:18 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 12:53:18 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/21348683' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 12:53:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 12:53:18 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/21348683' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 12:53:19 compute-0 ceph-mon[74966]: pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:19 compute-0 ceph-mon[74966]: from='client.? 192.168.122.10:0/21348683' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 12:53:19 compute-0 ceph-mon[74966]: from='client.? 192.168.122.10:0/21348683' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 12:53:20 compute-0 sudo[248884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:53:20 compute-0 sudo[248884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:53:20 compute-0 sudo[248884]: pam_unix(sudo:session): session closed for user root
Nov 26 12:53:20 compute-0 sudo[248909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:53:20 compute-0 sudo[248909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:53:20 compute-0 sudo[248909]: pam_unix(sudo:session): session closed for user root
Nov 26 12:53:20 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:20 compute-0 sudo[248934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:53:20 compute-0 sudo[248934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:53:20 compute-0 sudo[248934]: pam_unix(sudo:session): session closed for user root
Nov 26 12:53:20 compute-0 sudo[248959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 12:53:20 compute-0 sudo[248959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:53:20 compute-0 sudo[248959]: pam_unix(sudo:session): session closed for user root
Nov 26 12:53:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:53:20 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:53:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 12:53:20 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:53:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 12:53:20 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:53:20 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 5b9e185c-a29b-4c17-8c59-ac70376c536d does not exist
Nov 26 12:53:20 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 7b38ebbb-1d79-43e8-9ad3-a2243f19b4c8 does not exist
Nov 26 12:53:20 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 0b845ad8-9658-41f1-88b7-1fdfd296be15 does not exist
Nov 26 12:53:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 12:53:20 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:53:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 12:53:20 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:53:20 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:53:20 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
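Each handle_command / audit pair above is the mgr's cephadm module sending a structured mon command (a JSON object keyed by "prefix") to the monitor, with the audit channel recording the sender and the dispatched command. The same commands can be issued from Python through librados' mon_command; a sketch, assuming client.admin access via the local ceph.conf:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        # Same command the mgr dispatches above.
        cmd = json.dumps({"prefix": "config generate-minimal-conf"})
        ret, out, err = cluster.mon_command(cmd, b"")
        if ret == 0:
            print(out.decode())  # the minimal ceph.conf returned by the mon
    finally:
        cluster.shutdown()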
Nov 26 12:53:20 compute-0 sudo[249013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:53:20 compute-0 sudo[249013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:53:20 compute-0 sudo[249013]: pam_unix(sudo:session): session closed for user root
Nov 26 12:53:20 compute-0 sudo[249038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:53:20 compute-0 sudo[249038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:53:20 compute-0 sudo[249038]: pam_unix(sudo:session): session closed for user root
Nov 26 12:53:20 compute-0 sudo[249063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:53:20 compute-0 sudo[249063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:53:20 compute-0 sudo[249063]: pam_unix(sudo:session): session closed for user root
Nov 26 12:53:20 compute-0 sudo[249088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 12:53:20 compute-0 sudo[249088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:53:21 compute-0 podman[249144]: 2025-11-26 12:53:21.00823178 +0000 UTC m=+0.026678200 container create 2efd60a3899f0f453a150bcabbad3ecc860f3fb19823887180a9c4f6c1c1bf2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_feistel, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:53:21 compute-0 systemd[1]: Started libpod-conmon-2efd60a3899f0f453a150bcabbad3ecc860f3fb19823887180a9c4f6c1c1bf2d.scope.
Nov 26 12:53:21 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:53:21 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:53:21 compute-0 podman[249144]: 2025-11-26 12:53:21.075802432 +0000 UTC m=+0.094248853 container init 2efd60a3899f0f453a150bcabbad3ecc860f3fb19823887180a9c4f6c1c1bf2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 12:53:21 compute-0 podman[249144]: 2025-11-26 12:53:21.081445806 +0000 UTC m=+0.099892236 container start 2efd60a3899f0f453a150bcabbad3ecc860f3fb19823887180a9c4f6c1c1bf2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_feistel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:53:21 compute-0 podman[249144]: 2025-11-26 12:53:21.083028148 +0000 UTC m=+0.101474568 container attach 2efd60a3899f0f453a150bcabbad3ecc860f3fb19823887180a9c4f6c1c1bf2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_feistel, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:53:21 compute-0 hopeful_feistel[249157]: 167 167
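The "167 167" printed by this short-lived container is the uid and gid of the ceph user inside the image (167 is the fixed ceph uid/gid on RHEL-family systems); cephadm uses it to chown host-side data directories before starting daemons. The exact probe command is not shown in the log, so the stat-based variant below is an assumption:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    def container_ceph_uid_gid(image: str = IMAGE) -> tuple[int, int]:
        """Ask the image which uid/gid owns /var/lib/ceph, mimicking the probe
        whose output is the '167 167' line above (probe command is assumed)."""
        out = subprocess.run(
            ["podman", "run", "--rm", "--entrypoint", "stat", image,
             "-c", "%u %g", "/var/lib/ceph"],
            check=True, capture_output=True, text=True,
        )
        uid, gid = out.stdout.split()
        return int(uid), int(gid)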
Nov 26 12:53:21 compute-0 systemd[1]: libpod-2efd60a3899f0f453a150bcabbad3ecc860f3fb19823887180a9c4f6c1c1bf2d.scope: Deactivated successfully.
Nov 26 12:53:21 compute-0 conmon[249157]: conmon 2efd60a3899f0f453a15 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2efd60a3899f0f453a150bcabbad3ecc860f3fb19823887180a9c4f6c1c1bf2d.scope/container/memory.events
Nov 26 12:53:21 compute-0 podman[249144]: 2025-11-26 12:53:21.087377333 +0000 UTC m=+0.105823752 container died 2efd60a3899f0f453a150bcabbad3ecc860f3fb19823887180a9c4f6c1c1bf2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_feistel, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:53:21 compute-0 podman[249144]: 2025-11-26 12:53:20.997625706 +0000 UTC m=+0.016072136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:53:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d24d9f3fc863308279a777bca23c2126d6e792f848a46201a82509ff065dd9cf-merged.mount: Deactivated successfully.
Nov 26 12:53:21 compute-0 podman[249144]: 2025-11-26 12:53:21.104556944 +0000 UTC m=+0.123003363 container remove 2efd60a3899f0f453a150bcabbad3ecc860f3fb19823887180a9c4f6c1c1bf2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 12:53:21 compute-0 systemd[1]: libpod-conmon-2efd60a3899f0f453a150bcabbad3ecc860f3fb19823887180a9c4f6c1c1bf2d.scope: Deactivated successfully.
Nov 26 12:53:21 compute-0 podman[249179]: 2025-11-26 12:53:21.223058965 +0000 UTC m=+0.027743518 container create 0c6c70913c3c9c5eb01615e2043641dc1adb2a4cabb9942b7161a438220566a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_brahmagupta, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 12:53:21 compute-0 systemd[1]: Started libpod-conmon-0c6c70913c3c9c5eb01615e2043641dc1adb2a4cabb9942b7161a438220566a2.scope.
Nov 26 12:53:21 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:53:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b7cb2fd1453fdd8c98b1c5a0822e47a606ae1e516c3acf542dd2d26e05d4b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:53:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b7cb2fd1453fdd8c98b1c5a0822e47a606ae1e516c3acf542dd2d26e05d4b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:53:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b7cb2fd1453fdd8c98b1c5a0822e47a606ae1e516c3acf542dd2d26e05d4b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:53:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b7cb2fd1453fdd8c98b1c5a0822e47a606ae1e516c3acf542dd2d26e05d4b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:53:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b7cb2fd1453fdd8c98b1c5a0822e47a606ae1e516c3acf542dd2d26e05d4b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:53:21 compute-0 podman[249179]: 2025-11-26 12:53:21.275262291 +0000 UTC m=+0.079946844 container init 0c6c70913c3c9c5eb01615e2043641dc1adb2a4cabb9942b7161a438220566a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 12:53:21 compute-0 podman[249179]: 2025-11-26 12:53:21.282737475 +0000 UTC m=+0.087422028 container start 0c6c70913c3c9c5eb01615e2043641dc1adb2a4cabb9942b7161a438220566a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_brahmagupta, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 12:53:21 compute-0 podman[249179]: 2025-11-26 12:53:21.285336402 +0000 UTC m=+0.090020956 container attach 0c6c70913c3c9c5eb01615e2043641dc1adb2a4cabb9942b7161a438220566a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_brahmagupta, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 12:53:21 compute-0 podman[249179]: 2025-11-26 12:53:21.211551442 +0000 UTC m=+0.016236014 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:53:21 compute-0 ceph-mon[74966]: pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:21 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:53:21 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:53:21 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:53:21 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:53:21 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:53:21 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:53:22 compute-0 funny_brahmagupta[249192]: --> passed data devices: 0 physical, 3 LVM
Nov 26 12:53:22 compute-0 funny_brahmagupta[249192]: --> relative data size: 1.0
Nov 26 12:53:22 compute-0 funny_brahmagupta[249192]: --> All data devices are unavailable
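The funny_brahmagupta container is executing the "ceph-volume lvm batch --no-auto ... --yes --no-systemd" command dispatched at 12:53:20 against the three pre-created logical volumes. "All data devices are unavailable" here means batch found nothing left to prepare: the LVs already carry ceph.* LVM tags (visible in the lvm list output further down), so they are treated as consumed rather than as failed. One way to check that state by hand, as a rough heuristic rather than ceph-volume's own logic:

    import json
    import subprocess

    def lv_already_an_osd(lv_path: str) -> bool:
        """Return True if the LV's tags mark it as a prepared Ceph OSD, which is
        what makes 'ceph-volume lvm batch' report it as unavailable."""
        out = subprocess.run(
            ["lvs", "--reportformat", "json", "-o", "lv_tags", lv_path],
            check=True, capture_output=True, text=True,
        )
        rows = json.loads(out.stdout)["report"][0]["lv"]
        return any("ceph.osd_id=" in row.get("lv_tags", "") for row in rows)

    # e.g. lv_already_an_osd("/dev/ceph_vg0/ceph_lv0") -> True on this host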
Nov 26 12:53:22 compute-0 systemd[1]: libpod-0c6c70913c3c9c5eb01615e2043641dc1adb2a4cabb9942b7161a438220566a2.scope: Deactivated successfully.
Nov 26 12:53:22 compute-0 podman[249179]: 2025-11-26 12:53:22.116608873 +0000 UTC m=+0.921293426 container died 0c6c70913c3c9c5eb01615e2043641dc1adb2a4cabb9942b7161a438220566a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:53:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8b7cb2fd1453fdd8c98b1c5a0822e47a606ae1e516c3acf542dd2d26e05d4b6-merged.mount: Deactivated successfully.
Nov 26 12:53:22 compute-0 podman[249179]: 2025-11-26 12:53:22.150970349 +0000 UTC m=+0.955654902 container remove 0c6c70913c3c9c5eb01615e2043641dc1adb2a4cabb9942b7161a438220566a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 12:53:22 compute-0 systemd[1]: libpod-conmon-0c6c70913c3c9c5eb01615e2043641dc1adb2a4cabb9942b7161a438220566a2.scope: Deactivated successfully.
Nov 26 12:53:22 compute-0 sudo[249088]: pam_unix(sudo:session): session closed for user root
Nov 26 12:53:22 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:22 compute-0 sudo[249230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:53:22 compute-0 sudo[249230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:53:22 compute-0 sudo[249230]: pam_unix(sudo:session): session closed for user root
Nov 26 12:53:22 compute-0 sudo[249255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:53:22 compute-0 sudo[249255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:53:22 compute-0 sudo[249255]: pam_unix(sudo:session): session closed for user root
Nov 26 12:53:22 compute-0 sudo[249280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:53:22 compute-0 sudo[249280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:53:22 compute-0 sudo[249280]: pam_unix(sudo:session): session closed for user root
Nov 26 12:53:22 compute-0 sudo[249305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- lvm list --format json
Nov 26 12:53:22 compute-0 sudo[249305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:53:22 compute-0 podman[249361]: 2025-11-26 12:53:22.580117783 +0000 UTC m=+0.027265396 container create d321d524916f0f4b641513d1b5992cbd2a31138478be2bd275f655cd6dfa1f42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 26 12:53:22 compute-0 systemd[1]: Started libpod-conmon-d321d524916f0f4b641513d1b5992cbd2a31138478be2bd275f655cd6dfa1f42.scope.
Nov 26 12:53:22 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:53:22 compute-0 podman[249361]: 2025-11-26 12:53:22.636271132 +0000 UTC m=+0.083418755 container init d321d524916f0f4b641513d1b5992cbd2a31138478be2bd275f655cd6dfa1f42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_almeida, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:53:22 compute-0 podman[249361]: 2025-11-26 12:53:22.641542154 +0000 UTC m=+0.088689767 container start d321d524916f0f4b641513d1b5992cbd2a31138478be2bd275f655cd6dfa1f42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 12:53:22 compute-0 podman[249361]: 2025-11-26 12:53:22.642672934 +0000 UTC m=+0.089820547 container attach d321d524916f0f4b641513d1b5992cbd2a31138478be2bd275f655cd6dfa1f42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_almeida, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:53:22 compute-0 kind_almeida[249375]: 167 167
Nov 26 12:53:22 compute-0 systemd[1]: libpod-d321d524916f0f4b641513d1b5992cbd2a31138478be2bd275f655cd6dfa1f42.scope: Deactivated successfully.
Nov 26 12:53:22 compute-0 podman[249361]: 2025-11-26 12:53:22.645340951 +0000 UTC m=+0.092488564 container died d321d524916f0f4b641513d1b5992cbd2a31138478be2bd275f655cd6dfa1f42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_almeida, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 26 12:53:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a12844dcbb8aab58932258ef896a97d78f147f8af6d09fd1927cffaa18c407c-merged.mount: Deactivated successfully.
Nov 26 12:53:22 compute-0 podman[249361]: 2025-11-26 12:53:22.666115397 +0000 UTC m=+0.113263010 container remove d321d524916f0f4b641513d1b5992cbd2a31138478be2bd275f655cd6dfa1f42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_almeida, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 12:53:22 compute-0 podman[249361]: 2025-11-26 12:53:22.569235017 +0000 UTC m=+0.016382650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:53:22 compute-0 systemd[1]: libpod-conmon-d321d524916f0f4b641513d1b5992cbd2a31138478be2bd275f655cd6dfa1f42.scope: Deactivated successfully.
Nov 26 12:53:22 compute-0 podman[249397]: 2025-11-26 12:53:22.790947174 +0000 UTC m=+0.033723866 container create 0fef75dab57a08c531aaf5d2f17b4c55783c3b86705d5cf0ae0dccb303ba2a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 12:53:22 compute-0 systemd[1]: Started libpod-conmon-0fef75dab57a08c531aaf5d2f17b4c55783c3b86705d5cf0ae0dccb303ba2a65.scope.
Nov 26 12:53:22 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:53:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2edda49e8f8befdbbcb086582d1ec96c71a51de189f754d440bdd54263c0be5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:53:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2edda49e8f8befdbbcb086582d1ec96c71a51de189f754d440bdd54263c0be5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:53:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2edda49e8f8befdbbcb086582d1ec96c71a51de189f754d440bdd54263c0be5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:53:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2edda49e8f8befdbbcb086582d1ec96c71a51de189f754d440bdd54263c0be5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:53:22 compute-0 podman[249397]: 2025-11-26 12:53:22.862278903 +0000 UTC m=+0.105055615 container init 0fef75dab57a08c531aaf5d2f17b4c55783c3b86705d5cf0ae0dccb303ba2a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 12:53:22 compute-0 podman[249397]: 2025-11-26 12:53:22.870840766 +0000 UTC m=+0.113617459 container start 0fef75dab57a08c531aaf5d2f17b4c55783c3b86705d5cf0ae0dccb303ba2a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 12:53:22 compute-0 podman[249397]: 2025-11-26 12:53:22.872169199 +0000 UTC m=+0.114945911 container attach 0fef75dab57a08c531aaf5d2f17b4c55783c3b86705d5cf0ae0dccb303ba2a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 12:53:22 compute-0 podman[249397]: 2025-11-26 12:53:22.776266141 +0000 UTC m=+0.019042853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:53:23 compute-0 ceph-mon[74966]: pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:23 compute-0 priceless_bouman[249410]: {
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:     "0": [
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:         {
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "devices": [
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "/dev/loop3"
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             ],
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "lv_name": "ceph_lv0",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "lv_size": "21470642176",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "name": "ceph_lv0",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "tags": {
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.cluster_name": "ceph",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.crush_device_class": "",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.encrypted": "0",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.osd_id": "0",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.type": "block",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.vdo": "0"
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             },
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "type": "block",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "vg_name": "ceph_vg0"
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:         }
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:     ],
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:     "1": [
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:         {
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "devices": [
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "/dev/loop4"
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             ],
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "lv_name": "ceph_lv1",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "lv_size": "21470642176",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "name": "ceph_lv1",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "tags": {
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.cluster_name": "ceph",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.crush_device_class": "",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.encrypted": "0",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.osd_id": "1",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.type": "block",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.vdo": "0"
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             },
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "type": "block",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "vg_name": "ceph_vg1"
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:         }
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:     ],
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:     "2": [
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:         {
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "devices": [
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "/dev/loop5"
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             ],
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "lv_name": "ceph_lv2",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "lv_size": "21470642176",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "name": "ceph_lv2",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "tags": {
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.cluster_name": "ceph",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.crush_device_class": "",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.encrypted": "0",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.osd_id": "2",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.type": "block",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:                 "ceph.vdo": "0"
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             },
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "type": "block",
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:             "vg_name": "ceph_vg2"
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:         }
Nov 26 12:53:23 compute-0 priceless_bouman[249410]:     ]
Nov 26 12:53:23 compute-0 priceless_bouman[249410]: }
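The JSON emitted by the priceless_bouman container is the answer to the "ceph-volume ... lvm list --format json" call at 12:53:22: a map from OSD id to the logical volumes backing it, with the ceph.* metadata given both as a flat lv_tags string and as a parsed tags object. Extracting the OSD-to-device mapping from it takes only a few lines:

    import json

    def osd_to_lv(lvm_list_json: str) -> dict[str, str]:
        """Map OSD id -> backing LV path from 'ceph-volume lvm list --format json'
        output such as the block above."""
        inventory = json.loads(lvm_list_json)
        return {osd_id: lvs[0]["lv_path"] for osd_id, lvs in inventory.items()}

    # For the output above this returns:
    # {'0': '/dev/ceph_vg0/ceph_lv0', '1': '/dev/ceph_vg1/ceph_lv1',
    #  '2': '/dev/ceph_vg2/ceph_lv2'}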
Nov 26 12:53:23 compute-0 systemd[1]: libpod-0fef75dab57a08c531aaf5d2f17b4c55783c3b86705d5cf0ae0dccb303ba2a65.scope: Deactivated successfully.
Nov 26 12:53:23 compute-0 podman[249419]: 2025-11-26 12:53:23.586992175 +0000 UTC m=+0.024844434 container died 0fef75dab57a08c531aaf5d2f17b4c55783c3b86705d5cf0ae0dccb303ba2a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:53:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-2edda49e8f8befdbbcb086582d1ec96c71a51de189f754d440bdd54263c0be5b-merged.mount: Deactivated successfully.
Nov 26 12:53:23 compute-0 podman[249419]: 2025-11-26 12:53:23.628225401 +0000 UTC m=+0.066077648 container remove 0fef75dab57a08c531aaf5d2f17b4c55783c3b86705d5cf0ae0dccb303ba2a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 26 12:53:23 compute-0 systemd[1]: libpod-conmon-0fef75dab57a08c531aaf5d2f17b4c55783c3b86705d5cf0ae0dccb303ba2a65.scope: Deactivated successfully.
Nov 26 12:53:23 compute-0 sudo[249305]: pam_unix(sudo:session): session closed for user root
Nov 26 12:53:23 compute-0 sudo[249431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:53:23 compute-0 sudo[249431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:53:23 compute-0 sudo[249431]: pam_unix(sudo:session): session closed for user root
Nov 26 12:53:23 compute-0 sudo[249456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:53:23 compute-0 sudo[249456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:53:23 compute-0 sudo[249456]: pam_unix(sudo:session): session closed for user root
Nov 26 12:53:23 compute-0 nova_compute[247443]: 2025-11-26 12:53:23.768 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:53:23 compute-0 sudo[249481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:53:23 compute-0 sudo[249481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:53:23 compute-0 sudo[249481]: pam_unix(sudo:session): session closed for user root
Nov 26 12:53:23 compute-0 nova_compute[247443]: 2025-11-26 12:53:23.818 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:53:23 compute-0 nova_compute[247443]: 2025-11-26 12:53:23.818 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 12:53:23 compute-0 nova_compute[247443]: 2025-11-26 12:53:23.818 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 12:53:23 compute-0 nova_compute[247443]: 2025-11-26 12:53:23.825 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 12:53:23 compute-0 nova_compute[247443]: 2025-11-26 12:53:23.825 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:53:23 compute-0 nova_compute[247443]: 2025-11-26 12:53:23.826 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:53:23 compute-0 nova_compute[247443]: 2025-11-26 12:53:23.826 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:53:23 compute-0 nova_compute[247443]: 2025-11-26 12:53:23.826 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:53:23 compute-0 nova_compute[247443]: 2025-11-26 12:53:23.826 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 12:53:23 compute-0 nova_compute[247443]: 2025-11-26 12:53:23.826 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
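The burst of nova_compute DEBUG lines above is oslo.service's periodic task runner walking ComputeManager's decorated methods on its timer tick; each "Running periodic task ..." line is one decorated method being dispatched. The registration pattern, sketched with a hypothetical manager class (the spacing value is chosen arbitrarily, not taken from nova's configuration):

    from oslo_service import periodic_task

    class DemoManager(periodic_task.PeriodicTasks):
        """Hypothetical manager showing how methods such as
        _heal_instance_info_cache are registered as periodic tasks."""

        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            # Dispatched on the runner's tick; nova logs each dispatch as above.
            pass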
Nov 26 12:53:23 compute-0 nova_compute[247443]: 2025-11-26 12:53:23.840 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 12:53:23 compute-0 nova_compute[247443]: 2025-11-26 12:53:23.840 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 12:53:23 compute-0 nova_compute[247443]: 2025-11-26 12:53:23.840 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 12:53:23 compute-0 nova_compute[247443]: 2025-11-26 12:53:23.841 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 12:53:23 compute-0 nova_compute[247443]: 2025-11-26 12:53:23.841 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 12:53:23 compute-0 sudo[249506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- raw list --format json
Nov 26 12:53:23 compute-0 sudo[249506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:53:24 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 12:53:24 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2284262952' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 12:53:24 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:24 compute-0 podman[249584]: 2025-11-26 12:53:24.188460751 +0000 UTC m=+0.033455120 container create 54954175bd2527ece05d8d9d88043b0a5734b4d94e93606a1901aad010eb8751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 26 12:53:24 compute-0 nova_compute[247443]: 2025-11-26 12:53:24.195 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
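update_available_resource leads nova's resource tracker to shell out (via oslo.concurrency processutils) to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" so the RBD image backend can report available capacity; the log shows the command returning 0 in 0.354s. A standalone sketch of the same query, with the JSON field names assumed from current Ceph releases rather than taken from this log:

    import json
    import subprocess

    def ceph_capacity(conf: str = "/etc/ceph/ceph.conf", client: str = "openstack") -> dict:
        """Run the same 'ceph df' query nova issues above and return the
        cluster-wide byte counters (field names are an assumption)."""
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", client, "--conf", conf],
            check=True, capture_output=True, text=True,
        )
        stats = json.loads(out.stdout)["stats"]
        return {
            "total_bytes": stats["total_bytes"],
            "avail_bytes": stats["total_avail_bytes"],
            "used_bytes": stats["total_used_raw_bytes"],
        }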
Nov 26 12:53:24 compute-0 systemd[1]: Started libpod-conmon-54954175bd2527ece05d8d9d88043b0a5734b4d94e93606a1901aad010eb8751.scope.
Nov 26 12:53:24 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:53:24 compute-0 podman[249584]: 2025-11-26 12:53:24.253374536 +0000 UTC m=+0.098368925 container init 54954175bd2527ece05d8d9d88043b0a5734b4d94e93606a1901aad010eb8751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 12:53:24 compute-0 podman[249584]: 2025-11-26 12:53:24.259247994 +0000 UTC m=+0.104242364 container start 54954175bd2527ece05d8d9d88043b0a5734b4d94e93606a1901aad010eb8751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 12:53:24 compute-0 podman[249584]: 2025-11-26 12:53:24.26056719 +0000 UTC m=+0.105561589 container attach 54954175bd2527ece05d8d9d88043b0a5734b4d94e93606a1901aad010eb8751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:53:24 compute-0 wonderful_almeida[249599]: 167 167
Nov 26 12:53:24 compute-0 systemd[1]: libpod-54954175bd2527ece05d8d9d88043b0a5734b4d94e93606a1901aad010eb8751.scope: Deactivated successfully.
Nov 26 12:53:24 compute-0 conmon[249599]: conmon 54954175bd2527ece05d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-54954175bd2527ece05d8d9d88043b0a5734b4d94e93606a1901aad010eb8751.scope/container/memory.events
Nov 26 12:53:24 compute-0 podman[249584]: 2025-11-26 12:53:24.265605894 +0000 UTC m=+0.110600273 container died 54954175bd2527ece05d8d9d88043b0a5734b4d94e93606a1901aad010eb8751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:53:24 compute-0 podman[249584]: 2025-11-26 12:53:24.174445763 +0000 UTC m=+0.019440152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:53:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-8054a82bdd911106064ed1a4bd9416caffd1ea6ebe09560674e8bc8b3dc729ed-merged.mount: Deactivated successfully.
Nov 26 12:53:24 compute-0 podman[249584]: 2025-11-26 12:53:24.295667037 +0000 UTC m=+0.140661406 container remove 54954175bd2527ece05d8d9d88043b0a5734b4d94e93606a1901aad010eb8751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 12:53:24 compute-0 systemd[1]: libpod-conmon-54954175bd2527ece05d8d9d88043b0a5734b4d94e93606a1901aad010eb8751.scope: Deactivated successfully.
Nov 26 12:53:24 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2284262952' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 12:53:24 compute-0 podman[249621]: 2025-11-26 12:53:24.442446531 +0000 UTC m=+0.034398477 container create fd150ebb0a365dba6e1e9bc38ce3d2fea3280a7cc999d20fea76c038f5e760bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dewdney, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 26 12:53:24 compute-0 systemd[1]: Started libpod-conmon-fd150ebb0a365dba6e1e9bc38ce3d2fea3280a7cc999d20fea76c038f5e760bd.scope.
Nov 26 12:53:24 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:53:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3917bb2ea876d2fddb90e15b83d7a1c961abfecdbfd394ca9e933e9ff77ad045/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:53:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3917bb2ea876d2fddb90e15b83d7a1c961abfecdbfd394ca9e933e9ff77ad045/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:53:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3917bb2ea876d2fddb90e15b83d7a1c961abfecdbfd394ca9e933e9ff77ad045/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:53:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3917bb2ea876d2fddb90e15b83d7a1c961abfecdbfd394ca9e933e9ff77ad045/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:53:24 compute-0 podman[249621]: 2025-11-26 12:53:24.515493242 +0000 UTC m=+0.107445198 container init fd150ebb0a365dba6e1e9bc38ce3d2fea3280a7cc999d20fea76c038f5e760bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dewdney, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 12:53:24 compute-0 podman[249621]: 2025-11-26 12:53:24.520709331 +0000 UTC m=+0.112661276 container start fd150ebb0a365dba6e1e9bc38ce3d2fea3280a7cc999d20fea76c038f5e760bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 12:53:24 compute-0 podman[249621]: 2025-11-26 12:53:24.523488577 +0000 UTC m=+0.115440523 container attach fd150ebb0a365dba6e1e9bc38ce3d2fea3280a7cc999d20fea76c038f5e760bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dewdney, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:53:24 compute-0 podman[249621]: 2025-11-26 12:53:24.426934953 +0000 UTC m=+0.018886909 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:53:24 compute-0 nova_compute[247443]: 2025-11-26 12:53:24.542 247447 WARNING nova.virt.libvirt.driver [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 12:53:24 compute-0 nova_compute[247443]: 2025-11-26 12:53:24.543 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5188MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 12:53:24 compute-0 nova_compute[247443]: 2025-11-26 12:53:24.544 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 12:53:24 compute-0 nova_compute[247443]: 2025-11-26 12:53:24.544 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 12:53:24 compute-0 nova_compute[247443]: 2025-11-26 12:53:24.585 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 12:53:24 compute-0 nova_compute[247443]: 2025-11-26 12:53:24.585 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 12:53:24 compute-0 nova_compute[247443]: 2025-11-26 12:53:24.597 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 12:53:24 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 12:53:24 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4166629011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 12:53:24 compute-0 nova_compute[247443]: 2025-11-26 12:53:24.935 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 12:53:24 compute-0 nova_compute[247443]: 2025-11-26 12:53:24.939 247447 DEBUG nova.compute.provider_tree [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Inventory has not changed in ProviderTree for provider: b5f91a62-c356-4895-a9c1-523d85f8751b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 12:53:24 compute-0 nova_compute[247443]: 2025-11-26 12:53:24.951 247447 DEBUG nova.scheduler.client.report [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Inventory has not changed for provider b5f91a62-c356-4895-a9c1-523d85f8751b based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 12:53:24 compute-0 nova_compute[247443]: 2025-11-26 12:53:24.953 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 12:53:24 compute-0 nova_compute[247443]: 2025-11-26 12:53:24.953 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.409s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 12:53:25 compute-0 amazing_dewdney[249635]: {
Nov 26 12:53:25 compute-0 amazing_dewdney[249635]:     "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 12:53:25 compute-0 amazing_dewdney[249635]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:53:25 compute-0 amazing_dewdney[249635]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 12:53:25 compute-0 amazing_dewdney[249635]:         "osd_id": 1,
Nov 26 12:53:25 compute-0 amazing_dewdney[249635]:         "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:53:25 compute-0 amazing_dewdney[249635]:         "type": "bluestore"
Nov 26 12:53:25 compute-0 amazing_dewdney[249635]:     },
Nov 26 12:53:25 compute-0 amazing_dewdney[249635]:     "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 12:53:25 compute-0 amazing_dewdney[249635]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:53:25 compute-0 amazing_dewdney[249635]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 12:53:25 compute-0 amazing_dewdney[249635]:         "osd_id": 2,
Nov 26 12:53:25 compute-0 amazing_dewdney[249635]:         "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:53:25 compute-0 amazing_dewdney[249635]:         "type": "bluestore"
Nov 26 12:53:25 compute-0 amazing_dewdney[249635]:     },
Nov 26 12:53:25 compute-0 amazing_dewdney[249635]:     "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 12:53:25 compute-0 amazing_dewdney[249635]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:53:25 compute-0 amazing_dewdney[249635]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 12:53:25 compute-0 amazing_dewdney[249635]:         "osd_id": 0,
Nov 26 12:53:25 compute-0 amazing_dewdney[249635]:         "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:53:25 compute-0 amazing_dewdney[249635]:         "type": "bluestore"
Nov 26 12:53:25 compute-0 amazing_dewdney[249635]:     }
Nov 26 12:53:25 compute-0 amazing_dewdney[249635]: }
Nov 26 12:53:25 compute-0 systemd[1]: libpod-fd150ebb0a365dba6e1e9bc38ce3d2fea3280a7cc999d20fea76c038f5e760bd.scope: Deactivated successfully.
Nov 26 12:53:25 compute-0 ceph-mon[74966]: pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:25 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/4166629011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 12:53:25 compute-0 podman[249690]: 2025-11-26 12:53:25.387023639 +0000 UTC m=+0.024104360 container died fd150ebb0a365dba6e1e9bc38ce3d2fea3280a7cc999d20fea76c038f5e760bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 12:53:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3917bb2ea876d2fddb90e15b83d7a1c961abfecdbfd394ca9e933e9ff77ad045-merged.mount: Deactivated successfully.
Nov 26 12:53:25 compute-0 podman[249690]: 2025-11-26 12:53:25.417066839 +0000 UTC m=+0.054147539 container remove fd150ebb0a365dba6e1e9bc38ce3d2fea3280a7cc999d20fea76c038f5e760bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 12:53:25 compute-0 systemd[1]: libpod-conmon-fd150ebb0a365dba6e1e9bc38ce3d2fea3280a7cc999d20fea76c038f5e760bd.scope: Deactivated successfully.
Nov 26 12:53:25 compute-0 sudo[249506]: pam_unix(sudo:session): session closed for user root
Nov 26 12:53:25 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:53:25 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:53:25 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:53:25 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:53:25 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev a0104314-0bfb-4ed2-95b5-bf6c8626b560 does not exist
Nov 26 12:53:25 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 2aeac5c0-b062-4ad2-a52a-9d4774826cef does not exist
Nov 26 12:53:25 compute-0 sudo[249702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:53:25 compute-0 sudo[249702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:53:25 compute-0 sudo[249702]: pam_unix(sudo:session): session closed for user root
Nov 26 12:53:25 compute-0 sudo[249727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 12:53:25 compute-0 sudo[249727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:53:25 compute-0 sudo[249727]: pam_unix(sudo:session): session closed for user root
Nov 26 12:53:25 compute-0 nova_compute[247443]: 2025-11-26 12:53:25.946 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:53:25 compute-0 nova_compute[247443]: 2025-11-26 12:53:25.947 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:53:25 compute-0 nova_compute[247443]: 2025-11-26 12:53:25.947 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:53:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:53:26 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:26 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:53:26 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:53:26 compute-0 ceph-mon[74966]: pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:28 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:29 compute-0 ceph-mon[74966]: pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:30 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:31 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:53:31 compute-0 ceph-mon[74966]: pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:32 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:33 compute-0 ceph-mon[74966]: pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:34 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:35 compute-0 ceph-mon[74966]: pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:53:35
Nov 26 12:53:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 12:53:35 compute-0 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 12:53:35 compute-0 ceph-mgr[75236]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'backups', '.rgw.root', 'vms', 'default.rgw.control', 'images', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log']
Nov 26 12:53:35 compute-0 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 12:53:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:53:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:53:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:53:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:53:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:53:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:53:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 12:53:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:53:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 12:53:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:53:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:53:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:53:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:53:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:53:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:53:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:53:36 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:53:36 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:37 compute-0 ceph-mon[74966]: pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:38 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:39 compute-0 ceph-mon[74966]: pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:40 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:53:41 compute-0 ceph-mon[74966]: pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:42 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:43 compute-0 ceph-mon[74966]: pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:43 compute-0 podman[249752]: 2025-11-26 12:53:43.892453364 +0000 UTC m=+0.052138275 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 26 12:53:43 compute-0 podman[249753]: 2025-11-26 12:53:43.922262934 +0000 UTC m=+0.079200477 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 12:53:44 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 12:53:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:53:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 12:53:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:53:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:53:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:53:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:53:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:53:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:53:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:53:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:53:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:53:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 12:53:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:53:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:53:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:53:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 12:53:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:53:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 12:53:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:53:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:53:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:53:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 12:53:45 compute-0 ceph-mon[74966]: pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:53:46 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:47 compute-0 ceph-mon[74966]: pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:48 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:48 compute-0 podman[249785]: 2025-11-26 12:53:48.913294332 +0000 UTC m=+0.071285933 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 26 12:53:49 compute-0 ceph-mon[74966]: pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:50 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:51 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:53:51 compute-0 ceph-mon[74966]: pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:52 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:53 compute-0 ceph-mon[74966]: pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:54 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:55 compute-0 ceph-mon[74966]: pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:53:56 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:57 compute-0 ceph-mon[74966]: pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:58 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:53:59 compute-0 ceph-mon[74966]: pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:00 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:01 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:54:01 compute-0 ceph-mon[74966]: pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:54:01.728 159053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 12:54:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:54:01.729 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 12:54:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:54:01.729 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 12:54:02 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:03 compute-0 ceph-mon[74966]: pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:04 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:05 compute-0 ceph-mon[74966]: pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:54:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:54:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:54:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:54:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:54:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:54:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:54:06 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:07 compute-0 ceph-mon[74966]: pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:08 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:09 compute-0 ceph-mon[74966]: pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:10 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:11 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:54:11 compute-0 ceph-mon[74966]: pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:12 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:13 compute-0 ceph-mon[74966]: pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:14 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:14 compute-0 podman[249809]: 2025-11-26 12:54:14.886126001 +0000 UTC m=+0.039735273 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Nov 26 12:54:14 compute-0 podman[249808]: 2025-11-26 12:54:14.911379746 +0000 UTC m=+0.065280908 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 26 12:54:15 compute-0 ceph-mon[74966]: pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:16 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:54:16 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:17 compute-0 ceph-mon[74966]: pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:18 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 12:54:18 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2577967386' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 12:54:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 12:54:18 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2577967386' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 12:54:19 compute-0 ceph-mon[74966]: pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:19 compute-0 ceph-mon[74966]: from='client.? 192.168.122.10:0/2577967386' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 12:54:19 compute-0 ceph-mon[74966]: from='client.? 192.168.122.10:0/2577967386' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 12:54:19 compute-0 podman[249841]: 2025-11-26 12:54:19.908559134 +0000 UTC m=+0.070168044 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 26 12:54:20 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:21 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:54:21 compute-0 ceph-mon[74966]: pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:22 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:23 compute-0 ceph-mon[74966]: pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:23 compute-0 nova_compute[247443]: 2025-11-26 12:54:23.818 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:54:23 compute-0 nova_compute[247443]: 2025-11-26 12:54:23.821 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 12:54:23 compute-0 nova_compute[247443]: 2025-11-26 12:54:23.821 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 12:54:23 compute-0 nova_compute[247443]: 2025-11-26 12:54:23.837 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 12:54:23 compute-0 nova_compute[247443]: 2025-11-26 12:54:23.837 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:54:23 compute-0 nova_compute[247443]: 2025-11-26 12:54:23.837 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:54:23 compute-0 nova_compute[247443]: 2025-11-26 12:54:23.837 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:54:23 compute-0 nova_compute[247443]: 2025-11-26 12:54:23.854 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 12:54:23 compute-0 nova_compute[247443]: 2025-11-26 12:54:23.854 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 12:54:23 compute-0 nova_compute[247443]: 2025-11-26 12:54:23.854 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 12:54:23 compute-0 nova_compute[247443]: 2025-11-26 12:54:23.854 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 12:54:23 compute-0 nova_compute[247443]: 2025-11-26 12:54:23.855 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 12:54:24 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 12:54:24 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2601247070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 12:54:24 compute-0 nova_compute[247443]: 2025-11-26 12:54:24.206 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 12:54:24 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:24 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2601247070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 12:54:24 compute-0 nova_compute[247443]: 2025-11-26 12:54:24.435 247447 WARNING nova.virt.libvirt.driver [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 12:54:24 compute-0 nova_compute[247443]: 2025-11-26 12:54:24.437 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5235MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 12:54:24 compute-0 nova_compute[247443]: 2025-11-26 12:54:24.437 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 12:54:24 compute-0 nova_compute[247443]: 2025-11-26 12:54:24.437 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 12:54:24 compute-0 nova_compute[247443]: 2025-11-26 12:54:24.484 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 12:54:24 compute-0 nova_compute[247443]: 2025-11-26 12:54:24.484 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 12:54:24 compute-0 nova_compute[247443]: 2025-11-26 12:54:24.501 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 12:54:24 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 12:54:24 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4095758321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 12:54:24 compute-0 nova_compute[247443]: 2025-11-26 12:54:24.838 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 12:54:24 compute-0 nova_compute[247443]: 2025-11-26 12:54:24.843 247447 DEBUG nova.compute.provider_tree [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Inventory has not changed in ProviderTree for provider: b5f91a62-c356-4895-a9c1-523d85f8751b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 12:54:24 compute-0 nova_compute[247443]: 2025-11-26 12:54:24.854 247447 DEBUG nova.scheduler.client.report [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Inventory has not changed for provider b5f91a62-c356-4895-a9c1-523d85f8751b based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 12:54:24 compute-0 nova_compute[247443]: 2025-11-26 12:54:24.855 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 12:54:24 compute-0 nova_compute[247443]: 2025-11-26 12:54:24.856 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.418s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 12:54:25 compute-0 ceph-mon[74966]: pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:25 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/4095758321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 12:54:25 compute-0 sudo[249908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:54:25 compute-0 sudo[249908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:54:25 compute-0 sudo[249908]: pam_unix(sudo:session): session closed for user root
Nov 26 12:54:25 compute-0 sudo[249933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:54:25 compute-0 sudo[249933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:54:25 compute-0 sudo[249933]: pam_unix(sudo:session): session closed for user root
Nov 26 12:54:25 compute-0 sudo[249958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:54:25 compute-0 sudo[249958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:54:25 compute-0 sudo[249958]: pam_unix(sudo:session): session closed for user root
Nov 26 12:54:25 compute-0 sudo[249983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 12:54:25 compute-0 sudo[249983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:54:25 compute-0 nova_compute[247443]: 2025-11-26 12:54:25.838 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:54:25 compute-0 nova_compute[247443]: 2025-11-26 12:54:25.838 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:54:25 compute-0 nova_compute[247443]: 2025-11-26 12:54:25.839 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:54:25 compute-0 nova_compute[247443]: 2025-11-26 12:54:25.839 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 12:54:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:54:26 compute-0 sudo[249983]: pam_unix(sudo:session): session closed for user root
Nov 26 12:54:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:54:26 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:54:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 12:54:26 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:54:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 12:54:26 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:54:26 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 4774f82d-7b5f-4620-bc80-48c703a51331 does not exist
Nov 26 12:54:26 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 2ef7cf1e-5ec8-4c45-9819-382cd4647541 does not exist
Nov 26 12:54:26 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 222bc08e-1562-4a83-9f37-bed2b555a2a5 does not exist
Nov 26 12:54:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 12:54:26 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:54:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 12:54:26 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:54:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:54:26 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:54:26 compute-0 sudo[250037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:54:26 compute-0 sudo[250037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:54:26 compute-0 sudo[250037]: pam_unix(sudo:session): session closed for user root
Nov 26 12:54:26 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:26 compute-0 sudo[250062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:54:26 compute-0 sudo[250062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:54:26 compute-0 sudo[250062]: pam_unix(sudo:session): session closed for user root
Nov 26 12:54:26 compute-0 sudo[250087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:54:26 compute-0 sudo[250087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:54:26 compute-0 sudo[250087]: pam_unix(sudo:session): session closed for user root
Nov 26 12:54:26 compute-0 sudo[250112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 12:54:26 compute-0 sudo[250112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:54:26 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:54:26 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:54:26 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:54:26 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:54:26 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:54:26 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:54:26 compute-0 podman[250169]: 2025-11-26 12:54:26.604349952 +0000 UTC m=+0.030391072 container create 5b2516a2498df3f3cbbad3e17ed69d5793fb27aaf290e8ae445ace3e8b1fa11d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:54:26 compute-0 systemd[1]: Started libpod-conmon-5b2516a2498df3f3cbbad3e17ed69d5793fb27aaf290e8ae445ace3e8b1fa11d.scope.
Nov 26 12:54:26 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:54:26 compute-0 podman[250169]: 2025-11-26 12:54:26.669084483 +0000 UTC m=+0.095125604 container init 5b2516a2498df3f3cbbad3e17ed69d5793fb27aaf290e8ae445ace3e8b1fa11d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poincare, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 12:54:26 compute-0 podman[250169]: 2025-11-26 12:54:26.67478152 +0000 UTC m=+0.100822641 container start 5b2516a2498df3f3cbbad3e17ed69d5793fb27aaf290e8ae445ace3e8b1fa11d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poincare, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 12:54:26 compute-0 podman[250169]: 2025-11-26 12:54:26.677328681 +0000 UTC m=+0.103369811 container attach 5b2516a2498df3f3cbbad3e17ed69d5793fb27aaf290e8ae445ace3e8b1fa11d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:54:26 compute-0 infallible_poincare[250183]: 167 167
Nov 26 12:54:26 compute-0 podman[250169]: 2025-11-26 12:54:26.680457389 +0000 UTC m=+0.106498508 container died 5b2516a2498df3f3cbbad3e17ed69d5793fb27aaf290e8ae445ace3e8b1fa11d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poincare, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 26 12:54:26 compute-0 systemd[1]: libpod-5b2516a2498df3f3cbbad3e17ed69d5793fb27aaf290e8ae445ace3e8b1fa11d.scope: Deactivated successfully.
Nov 26 12:54:26 compute-0 podman[250169]: 2025-11-26 12:54:26.592038125 +0000 UTC m=+0.018079266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:54:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-47f574a4a3d630e6fe05b7c54ee50659c1932263d046fed0022051d90f8842df-merged.mount: Deactivated successfully.
Nov 26 12:54:26 compute-0 podman[250169]: 2025-11-26 12:54:26.706133243 +0000 UTC m=+0.132174363 container remove 5b2516a2498df3f3cbbad3e17ed69d5793fb27aaf290e8ae445ace3e8b1fa11d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poincare, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 12:54:26 compute-0 systemd[1]: libpod-conmon-5b2516a2498df3f3cbbad3e17ed69d5793fb27aaf290e8ae445ace3e8b1fa11d.scope: Deactivated successfully.
Nov 26 12:54:26 compute-0 nova_compute[247443]: 2025-11-26 12:54:26.819 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:54:26 compute-0 nova_compute[247443]: 2025-11-26 12:54:26.819 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:54:26 compute-0 podman[250205]: 2025-11-26 12:54:26.844249825 +0000 UTC m=+0.034922132 container create 9dc87f0f0af99b152e7142f62f93318bca3aaa72c01101530914c955f98e8d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curran, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 12:54:26 compute-0 systemd[1]: Started libpod-conmon-9dc87f0f0af99b152e7142f62f93318bca3aaa72c01101530914c955f98e8d79.scope.
Nov 26 12:54:26 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:54:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987443d2a7cd7cfca3dfa187489d5028cdf2c8cd035ce07da497a3ce3c1f6a0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:54:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987443d2a7cd7cfca3dfa187489d5028cdf2c8cd035ce07da497a3ce3c1f6a0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:54:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987443d2a7cd7cfca3dfa187489d5028cdf2c8cd035ce07da497a3ce3c1f6a0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:54:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987443d2a7cd7cfca3dfa187489d5028cdf2c8cd035ce07da497a3ce3c1f6a0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:54:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987443d2a7cd7cfca3dfa187489d5028cdf2c8cd035ce07da497a3ce3c1f6a0c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:54:26 compute-0 podman[250205]: 2025-11-26 12:54:26.908461601 +0000 UTC m=+0.099133908 container init 9dc87f0f0af99b152e7142f62f93318bca3aaa72c01101530914c955f98e8d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 12:54:26 compute-0 podman[250205]: 2025-11-26 12:54:26.914702905 +0000 UTC m=+0.105375192 container start 9dc87f0f0af99b152e7142f62f93318bca3aaa72c01101530914c955f98e8d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 12:54:26 compute-0 podman[250205]: 2025-11-26 12:54:26.915894642 +0000 UTC m=+0.106566939 container attach 9dc87f0f0af99b152e7142f62f93318bca3aaa72c01101530914c955f98e8d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:54:26 compute-0 podman[250205]: 2025-11-26 12:54:26.831967516 +0000 UTC m=+0.022639833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:54:27 compute-0 ceph-mon[74966]: pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:27 compute-0 wizardly_curran[250218]: --> passed data devices: 0 physical, 3 LVM
Nov 26 12:54:27 compute-0 wizardly_curran[250218]: --> relative data size: 1.0
Nov 26 12:54:27 compute-0 wizardly_curran[250218]: --> All data devices are unavailable
Nov 26 12:54:27 compute-0 systemd[1]: libpod-9dc87f0f0af99b152e7142f62f93318bca3aaa72c01101530914c955f98e8d79.scope: Deactivated successfully.
Nov 26 12:54:27 compute-0 podman[250247]: 2025-11-26 12:54:27.809435719 +0000 UTC m=+0.020797740 container died 9dc87f0f0af99b152e7142f62f93318bca3aaa72c01101530914c955f98e8d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curran, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:54:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-987443d2a7cd7cfca3dfa187489d5028cdf2c8cd035ce07da497a3ce3c1f6a0c-merged.mount: Deactivated successfully.
Nov 26 12:54:27 compute-0 podman[250247]: 2025-11-26 12:54:27.843773289 +0000 UTC m=+0.055135310 container remove 9dc87f0f0af99b152e7142f62f93318bca3aaa72c01101530914c955f98e8d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 12:54:27 compute-0 systemd[1]: libpod-conmon-9dc87f0f0af99b152e7142f62f93318bca3aaa72c01101530914c955f98e8d79.scope: Deactivated successfully.
Nov 26 12:54:27 compute-0 sudo[250112]: pam_unix(sudo:session): session closed for user root
Nov 26 12:54:27 compute-0 sudo[250259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:54:27 compute-0 sudo[250259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:54:27 compute-0 sudo[250259]: pam_unix(sudo:session): session closed for user root
Nov 26 12:54:27 compute-0 sudo[250284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:54:27 compute-0 sudo[250284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:54:27 compute-0 sudo[250284]: pam_unix(sudo:session): session closed for user root
Nov 26 12:54:28 compute-0 sudo[250309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:54:28 compute-0 sudo[250309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:54:28 compute-0 sudo[250309]: pam_unix(sudo:session): session closed for user root
Nov 26 12:54:28 compute-0 sudo[250334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- lvm list --format json
Nov 26 12:54:28 compute-0 sudo[250334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:54:28 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:28 compute-0 podman[250389]: 2025-11-26 12:54:28.341321738 +0000 UTC m=+0.033078006 container create eae31419bc9625d913d85750c0fe9016ba933082eb39b2192ce5c21395a4eed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:54:28 compute-0 systemd[1]: Started libpod-conmon-eae31419bc9625d913d85750c0fe9016ba933082eb39b2192ce5c21395a4eed7.scope.
Nov 26 12:54:28 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:54:28 compute-0 podman[250389]: 2025-11-26 12:54:28.402459531 +0000 UTC m=+0.094215788 container init eae31419bc9625d913d85750c0fe9016ba933082eb39b2192ce5c21395a4eed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:54:28 compute-0 podman[250389]: 2025-11-26 12:54:28.408148875 +0000 UTC m=+0.099905133 container start eae31419bc9625d913d85750c0fe9016ba933082eb39b2192ce5c21395a4eed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 12:54:28 compute-0 podman[250389]: 2025-11-26 12:54:28.409166393 +0000 UTC m=+0.100922651 container attach eae31419bc9625d913d85750c0fe9016ba933082eb39b2192ce5c21395a4eed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 12:54:28 compute-0 thirsty_haslett[250402]: 167 167
Nov 26 12:54:28 compute-0 systemd[1]: libpod-eae31419bc9625d913d85750c0fe9016ba933082eb39b2192ce5c21395a4eed7.scope: Deactivated successfully.
Nov 26 12:54:28 compute-0 podman[250389]: 2025-11-26 12:54:28.412269412 +0000 UTC m=+0.104025669 container died eae31419bc9625d913d85750c0fe9016ba933082eb39b2192ce5c21395a4eed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 12:54:28 compute-0 podman[250389]: 2025-11-26 12:54:28.327330138 +0000 UTC m=+0.019086415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:54:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e485d8e7dc2f92dbde2cf3926848945f6e48a719588cbde45f7b43b6ed22f1b-merged.mount: Deactivated successfully.
Nov 26 12:54:28 compute-0 podman[250389]: 2025-11-26 12:54:28.43312972 +0000 UTC m=+0.124885977 container remove eae31419bc9625d913d85750c0fe9016ba933082eb39b2192ce5c21395a4eed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_haslett, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 12:54:28 compute-0 systemd[1]: libpod-conmon-eae31419bc9625d913d85750c0fe9016ba933082eb39b2192ce5c21395a4eed7.scope: Deactivated successfully.
Nov 26 12:54:28 compute-0 podman[250425]: 2025-11-26 12:54:28.571163686 +0000 UTC m=+0.032061882 container create 16b3ac14b29ae11779e7f9316e877f29328fd3222fcd5631c2f044f2e5d59121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 12:54:28 compute-0 systemd[1]: Started libpod-conmon-16b3ac14b29ae11779e7f9316e877f29328fd3222fcd5631c2f044f2e5d59121.scope.
Nov 26 12:54:28 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:54:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e468e3c819b8b3eed797fcad654b12b6c6dd3feaa87c870909d651206bddbf88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:54:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e468e3c819b8b3eed797fcad654b12b6c6dd3feaa87c870909d651206bddbf88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:54:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e468e3c819b8b3eed797fcad654b12b6c6dd3feaa87c870909d651206bddbf88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:54:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e468e3c819b8b3eed797fcad654b12b6c6dd3feaa87c870909d651206bddbf88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:54:28 compute-0 podman[250425]: 2025-11-26 12:54:28.626106442 +0000 UTC m=+0.087004648 container init 16b3ac14b29ae11779e7f9316e877f29328fd3222fcd5631c2f044f2e5d59121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_panini, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:54:28 compute-0 podman[250425]: 2025-11-26 12:54:28.631497322 +0000 UTC m=+0.092395519 container start 16b3ac14b29ae11779e7f9316e877f29328fd3222fcd5631c2f044f2e5d59121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:54:28 compute-0 podman[250425]: 2025-11-26 12:54:28.633027046 +0000 UTC m=+0.093925262 container attach 16b3ac14b29ae11779e7f9316e877f29328fd3222fcd5631c2f044f2e5d59121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_panini, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:54:28 compute-0 podman[250425]: 2025-11-26 12:54:28.556853304 +0000 UTC m=+0.017751510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:54:29 compute-0 beautiful_panini[250438]: {
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:     "0": [
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:         {
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "devices": [
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "/dev/loop3"
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             ],
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "lv_name": "ceph_lv0",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "lv_size": "21470642176",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "name": "ceph_lv0",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "tags": {
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.cluster_name": "ceph",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.crush_device_class": "",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.encrypted": "0",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.osd_id": "0",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.type": "block",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.vdo": "0"
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             },
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "type": "block",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "vg_name": "ceph_vg0"
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:         }
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:     ],
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:     "1": [
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:         {
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "devices": [
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "/dev/loop4"
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             ],
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "lv_name": "ceph_lv1",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "lv_size": "21470642176",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "name": "ceph_lv1",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "tags": {
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.cluster_name": "ceph",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.crush_device_class": "",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.encrypted": "0",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.osd_id": "1",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.type": "block",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.vdo": "0"
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             },
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "type": "block",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "vg_name": "ceph_vg1"
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:         }
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:     ],
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:     "2": [
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:         {
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "devices": [
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "/dev/loop5"
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             ],
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "lv_name": "ceph_lv2",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "lv_size": "21470642176",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "name": "ceph_lv2",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "tags": {
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.cluster_name": "ceph",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.crush_device_class": "",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.encrypted": "0",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.osd_id": "2",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.type": "block",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:                 "ceph.vdo": "0"
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             },
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "type": "block",
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:             "vg_name": "ceph_vg2"
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:         }
Nov 26 12:54:29 compute-0 beautiful_panini[250438]:     ]
Nov 26 12:54:29 compute-0 beautiful_panini[250438]: }
Nov 26 12:54:29 compute-0 systemd[1]: libpod-16b3ac14b29ae11779e7f9316e877f29328fd3222fcd5631c2f044f2e5d59121.scope: Deactivated successfully.
Nov 26 12:54:29 compute-0 podman[250425]: 2025-11-26 12:54:29.324028236 +0000 UTC m=+0.784926431 container died 16b3ac14b29ae11779e7f9316e877f29328fd3222fcd5631c2f044f2e5d59121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_panini, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 12:54:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-e468e3c819b8b3eed797fcad654b12b6c6dd3feaa87c870909d651206bddbf88-merged.mount: Deactivated successfully.
Nov 26 12:54:29 compute-0 podman[250425]: 2025-11-26 12:54:29.361310416 +0000 UTC m=+0.822208612 container remove 16b3ac14b29ae11779e7f9316e877f29328fd3222fcd5631c2f044f2e5d59121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:54:29 compute-0 systemd[1]: libpod-conmon-16b3ac14b29ae11779e7f9316e877f29328fd3222fcd5631c2f044f2e5d59121.scope: Deactivated successfully.
Nov 26 12:54:29 compute-0 sudo[250334]: pam_unix(sudo:session): session closed for user root
Nov 26 12:54:29 compute-0 ceph-mon[74966]: pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:29 compute-0 sudo[250456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:54:29 compute-0 sudo[250456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:54:29 compute-0 sudo[250456]: pam_unix(sudo:session): session closed for user root
Nov 26 12:54:29 compute-0 sudo[250481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:54:29 compute-0 sudo[250481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:54:29 compute-0 sudo[250481]: pam_unix(sudo:session): session closed for user root
Nov 26 12:54:29 compute-0 sudo[250506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:54:29 compute-0 sudo[250506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:54:29 compute-0 sudo[250506]: pam_unix(sudo:session): session closed for user root
Nov 26 12:54:29 compute-0 sudo[250531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- raw list --format json
Nov 26 12:54:29 compute-0 sudo[250531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:54:29 compute-0 podman[250587]: 2025-11-26 12:54:29.810700277 +0000 UTC m=+0.028267259 container create 8a69d22452b6bbe2bc45603ccd7e29439ec61deafb19fae611beed508bb5f5b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 12:54:29 compute-0 systemd[1]: Started libpod-conmon-8a69d22452b6bbe2bc45603ccd7e29439ec61deafb19fae611beed508bb5f5b6.scope.
Nov 26 12:54:29 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:54:29 compute-0 podman[250587]: 2025-11-26 12:54:29.867820286 +0000 UTC m=+0.085387290 container init 8a69d22452b6bbe2bc45603ccd7e29439ec61deafb19fae611beed508bb5f5b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 12:54:29 compute-0 podman[250587]: 2025-11-26 12:54:29.872393306 +0000 UTC m=+0.089960289 container start 8a69d22452b6bbe2bc45603ccd7e29439ec61deafb19fae611beed508bb5f5b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:54:29 compute-0 podman[250587]: 2025-11-26 12:54:29.873696832 +0000 UTC m=+0.091263815 container attach 8a69d22452b6bbe2bc45603ccd7e29439ec61deafb19fae611beed508bb5f5b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 12:54:29 compute-0 systemd[1]: libpod-8a69d22452b6bbe2bc45603ccd7e29439ec61deafb19fae611beed508bb5f5b6.scope: Deactivated successfully.
Nov 26 12:54:29 compute-0 hungry_galileo[250600]: 167 167
Nov 26 12:54:29 compute-0 podman[250587]: 2025-11-26 12:54:29.876403123 +0000 UTC m=+0.093970095 container died 8a69d22452b6bbe2bc45603ccd7e29439ec61deafb19fae611beed508bb5f5b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_galileo, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:54:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-96805486b05ad36c0554a42f1b69d9d06d76c11a04886e3efeb3e8fe56a5a832-merged.mount: Deactivated successfully.
Nov 26 12:54:29 compute-0 podman[250587]: 2025-11-26 12:54:29.894194275 +0000 UTC m=+0.111761259 container remove 8a69d22452b6bbe2bc45603ccd7e29439ec61deafb19fae611beed508bb5f5b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_galileo, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:54:29 compute-0 podman[250587]: 2025-11-26 12:54:29.799791215 +0000 UTC m=+0.017358218 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:54:29 compute-0 systemd[1]: libpod-conmon-8a69d22452b6bbe2bc45603ccd7e29439ec61deafb19fae611beed508bb5f5b6.scope: Deactivated successfully.
Nov 26 12:54:30 compute-0 podman[250622]: 2025-11-26 12:54:30.020902957 +0000 UTC m=+0.029221007 container create eeb432dc1833b73824d2e77ade9b2f8d9a53868dcbc77c15fd5d1546407a12f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:54:30 compute-0 systemd[1]: Started libpod-conmon-eeb432dc1833b73824d2e77ade9b2f8d9a53868dcbc77c15fd5d1546407a12f8.scope.
Nov 26 12:54:30 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:54:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec8cdcd23ac0aec701b27a39457aeb2ee76d70486b2d93c8a860726595976d5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:54:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec8cdcd23ac0aec701b27a39457aeb2ee76d70486b2d93c8a860726595976d5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:54:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec8cdcd23ac0aec701b27a39457aeb2ee76d70486b2d93c8a860726595976d5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:54:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec8cdcd23ac0aec701b27a39457aeb2ee76d70486b2d93c8a860726595976d5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:54:30 compute-0 podman[250622]: 2025-11-26 12:54:30.084986492 +0000 UTC m=+0.093304552 container init eeb432dc1833b73824d2e77ade9b2f8d9a53868dcbc77c15fd5d1546407a12f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:54:30 compute-0 podman[250622]: 2025-11-26 12:54:30.090104207 +0000 UTC m=+0.098422257 container start eeb432dc1833b73824d2e77ade9b2f8d9a53868dcbc77c15fd5d1546407a12f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 12:54:30 compute-0 podman[250622]: 2025-11-26 12:54:30.091496772 +0000 UTC m=+0.099814823 container attach eeb432dc1833b73824d2e77ade9b2f8d9a53868dcbc77c15fd5d1546407a12f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 12:54:30 compute-0 podman[250622]: 2025-11-26 12:54:30.008776511 +0000 UTC m=+0.017094581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:54:30 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:30 compute-0 inspiring_kalam[250635]: {
Nov 26 12:54:30 compute-0 inspiring_kalam[250635]:     "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 12:54:30 compute-0 inspiring_kalam[250635]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:54:30 compute-0 inspiring_kalam[250635]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 12:54:30 compute-0 inspiring_kalam[250635]:         "osd_id": 1,
Nov 26 12:54:30 compute-0 inspiring_kalam[250635]:         "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:54:30 compute-0 inspiring_kalam[250635]:         "type": "bluestore"
Nov 26 12:54:30 compute-0 inspiring_kalam[250635]:     },
Nov 26 12:54:30 compute-0 inspiring_kalam[250635]:     "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 12:54:30 compute-0 inspiring_kalam[250635]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:54:30 compute-0 inspiring_kalam[250635]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 12:54:30 compute-0 inspiring_kalam[250635]:         "osd_id": 2,
Nov 26 12:54:30 compute-0 inspiring_kalam[250635]:         "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:54:30 compute-0 inspiring_kalam[250635]:         "type": "bluestore"
Nov 26 12:54:30 compute-0 inspiring_kalam[250635]:     },
Nov 26 12:54:30 compute-0 inspiring_kalam[250635]:     "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 12:54:30 compute-0 inspiring_kalam[250635]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:54:30 compute-0 inspiring_kalam[250635]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 12:54:30 compute-0 inspiring_kalam[250635]:         "osd_id": 0,
Nov 26 12:54:30 compute-0 inspiring_kalam[250635]:         "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:54:30 compute-0 inspiring_kalam[250635]:         "type": "bluestore"
Nov 26 12:54:30 compute-0 inspiring_kalam[250635]:     }
Nov 26 12:54:30 compute-0 inspiring_kalam[250635]: }
Nov 26 12:54:30 compute-0 systemd[1]: libpod-eeb432dc1833b73824d2e77ade9b2f8d9a53868dcbc77c15fd5d1546407a12f8.scope: Deactivated successfully.
Nov 26 12:54:30 compute-0 podman[250622]: 2025-11-26 12:54:30.869295829 +0000 UTC m=+0.877613880 container died eeb432dc1833b73824d2e77ade9b2f8d9a53868dcbc77c15fd5d1546407a12f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 12:54:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec8cdcd23ac0aec701b27a39457aeb2ee76d70486b2d93c8a860726595976d5b-merged.mount: Deactivated successfully.
Nov 26 12:54:30 compute-0 podman[250622]: 2025-11-26 12:54:30.903690406 +0000 UTC m=+0.912008456 container remove eeb432dc1833b73824d2e77ade9b2f8d9a53868dcbc77c15fd5d1546407a12f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 12:54:30 compute-0 systemd[1]: libpod-conmon-eeb432dc1833b73824d2e77ade9b2f8d9a53868dcbc77c15fd5d1546407a12f8.scope: Deactivated successfully.
Nov 26 12:54:30 compute-0 sudo[250531]: pam_unix(sudo:session): session closed for user root
Nov 26 12:54:30 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:54:30 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:54:30 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:54:30 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:54:30 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev fc069475-44ba-435d-8205-568f4723662f does not exist
Nov 26 12:54:30 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 0f0d6cfb-d1be-4dc4-a285-3848263945ca does not exist
Nov 26 12:54:30 compute-0 sudo[250678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:54:30 compute-0 sudo[250678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:54:30 compute-0 sudo[250678]: pam_unix(sudo:session): session closed for user root
Nov 26 12:54:31 compute-0 sudo[250703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 12:54:31 compute-0 sudo[250703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:54:31 compute-0 sudo[250703]: pam_unix(sudo:session): session closed for user root
Nov 26 12:54:31 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:54:31 compute-0 ceph-mon[74966]: pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:31 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:54:31 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:54:32 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:33 compute-0 ceph-mon[74966]: pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:34 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:35 compute-0 ceph-mon[74966]: pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:54:35
Nov 26 12:54:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 12:54:35 compute-0 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 12:54:35 compute-0 ceph-mgr[75236]: [balancer INFO root] pools ['images', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'vms', '.mgr']
Nov 26 12:54:35 compute-0 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 12:54:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:54:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:54:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:54:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:54:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:54:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:54:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 12:54:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:54:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 12:54:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:54:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:54:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:54:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:54:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:54:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:54:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:54:36 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:54:36 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:37 compute-0 ceph-mon[74966]: pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:38 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:39 compute-0 ceph-mon[74966]: pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:40 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:54:41 compute-0 ceph-mon[74966]: pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:42 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:43 compute-0 ceph-mon[74966]: pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:44 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 12:54:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:54:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 12:54:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:54:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:54:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:54:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:54:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:54:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:54:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:54:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:54:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:54:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 12:54:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:54:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:54:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:54:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 12:54:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:54:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 12:54:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:54:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:54:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:54:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 12:54:45 compute-0 ceph-mon[74966]: pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:45 compute-0 podman[250728]: 2025-11-26 12:54:45.88823437 +0000 UTC m=+0.048096584 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 26 12:54:45 compute-0 podman[250729]: 2025-11-26 12:54:45.889276284 +0000 UTC m=+0.049568669 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:54:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.063314) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161686063357, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1772, "num_deletes": 250, "total_data_size": 2897002, "memory_usage": 2936008, "flush_reason": "Manual Compaction"}
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161686068439, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1649321, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11722, "largest_seqno": 13493, "table_properties": {"data_size": 1643468, "index_size": 2928, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14922, "raw_average_key_size": 20, "raw_value_size": 1630474, "raw_average_value_size": 2212, "num_data_blocks": 135, "num_entries": 737, "num_filter_entries": 737, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764161492, "oldest_key_time": 1764161492, "file_creation_time": 1764161686, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "360f285c-8dc8-4f98-b8a2-efdebada3f64", "db_session_id": "S468WH7D6IL73VDKE1V5", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 5148 microseconds, and 3947 cpu microseconds.
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.068467) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1649321 bytes OK
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.068483) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.068854) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.068867) EVENT_LOG_v1 {"time_micros": 1764161686068863, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.068878) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2889436, prev total WAL file size 2889436, number of live WAL files 2.
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.069707) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1610KB)], [29(7835KB)]
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161686069734, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9672651, "oldest_snapshot_seqno": -1}
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4003 keys, 7568024 bytes, temperature: kUnknown
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161686084974, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7568024, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7539429, "index_size": 17477, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 95305, "raw_average_key_size": 23, "raw_value_size": 7465432, "raw_average_value_size": 1864, "num_data_blocks": 763, "num_entries": 4003, "num_filter_entries": 4003, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160613, "oldest_key_time": 0, "file_creation_time": 1764161686, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "360f285c-8dc8-4f98-b8a2-efdebada3f64", "db_session_id": "S468WH7D6IL73VDKE1V5", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.085095) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7568024 bytes
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.085447) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 633.4 rd, 495.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.7 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(10.5) write-amplify(4.6) OK, records in: 4424, records dropped: 421 output_compression: NoCompression
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.085459) EVENT_LOG_v1 {"time_micros": 1764161686085454, "job": 12, "event": "compaction_finished", "compaction_time_micros": 15271, "compaction_time_cpu_micros": 12728, "output_level": 6, "num_output_files": 1, "total_output_size": 7568024, "num_input_records": 4424, "num_output_records": 4003, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161686085681, "job": 12, "event": "table_file_deletion", "file_number": 31}
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161686086599, "job": 12, "event": "table_file_deletion", "file_number": 29}
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.069658) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.086618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.086621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.086622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.086623) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:54:46 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:54:46.086624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:54:46 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:47 compute-0 ceph-mon[74966]: pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:48 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:49 compute-0 ceph-mon[74966]: pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:50 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:50 compute-0 podman[250761]: 2025-11-26 12:54:50.922413329 +0000 UTC m=+0.089872223 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 26 12:54:51 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:54:51 compute-0 ceph-mon[74966]: pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:52 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:53 compute-0 ceph-mon[74966]: pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:54 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:55 compute-0 ceph-mon[74966]: pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:54:56 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:57 compute-0 ceph-mon[74966]: pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:58 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:54:59 compute-0 ceph-mon[74966]: pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:00 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:01 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:55:01 compute-0 ceph-mon[74966]: pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:55:01.729 159053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 12:55:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:55:01.729 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 12:55:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:55:01.729 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 12:55:02 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:03 compute-0 ceph-mon[74966]: pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:04 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:05 compute-0 ceph-mon[74966]: pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:55:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:55:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:55:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:55:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:55:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:55:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:55:06 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:07 compute-0 ceph-mon[74966]: pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:08 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:09 compute-0 ceph-mon[74966]: pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:10 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:11 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:55:11 compute-0 ceph-mon[74966]: pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:12 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:13 compute-0 ceph-mon[74966]: pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:14 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:15 compute-0 ceph-mon[74966]: pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:16 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:55:16 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:16 compute-0 podman[250785]: 2025-11-26 12:55:16.885257496 +0000 UTC m=+0.044056570 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 26 12:55:16 compute-0 podman[250786]: 2025-11-26 12:55:16.894225819 +0000 UTC m=+0.049885084 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 26 12:55:17 compute-0 ceph-mon[74966]: pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:18 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 12:55:18 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3761241431' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 12:55:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 12:55:18 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3761241431' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 12:55:19 compute-0 ceph-mon[74966]: pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:19 compute-0 ceph-mon[74966]: from='client.? 192.168.122.10:0/3761241431' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 12:55:19 compute-0 ceph-mon[74966]: from='client.? 192.168.122.10:0/3761241431' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 12:55:20 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:21 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:55:21 compute-0 ceph-mon[74966]: pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:21 compute-0 podman[250818]: 2025-11-26 12:55:21.902372084 +0000 UTC m=+0.066182622 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 26 12:55:22 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:23 compute-0 ceph-mon[74966]: pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:23 compute-0 nova_compute[247443]: 2025-11-26 12:55:23.815 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:55:23 compute-0 nova_compute[247443]: 2025-11-26 12:55:23.826 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:55:23 compute-0 nova_compute[247443]: 2025-11-26 12:55:23.827 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 12:55:23 compute-0 nova_compute[247443]: 2025-11-26 12:55:23.827 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 12:55:23 compute-0 nova_compute[247443]: 2025-11-26 12:55:23.834 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 12:55:23 compute-0 nova_compute[247443]: 2025-11-26 12:55:23.834 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:55:24 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:24 compute-0 nova_compute[247443]: 2025-11-26 12:55:24.819 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:55:24 compute-0 nova_compute[247443]: 2025-11-26 12:55:24.819 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 12:55:25 compute-0 ceph-mon[74966]: pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:25 compute-0 nova_compute[247443]: 2025-11-26 12:55:25.815 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:55:25 compute-0 nova_compute[247443]: 2025-11-26 12:55:25.818 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:55:25 compute-0 nova_compute[247443]: 2025-11-26 12:55:25.818 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:55:25 compute-0 nova_compute[247443]: 2025-11-26 12:55:25.837 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 12:55:25 compute-0 nova_compute[247443]: 2025-11-26 12:55:25.838 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 12:55:25 compute-0 nova_compute[247443]: 2025-11-26 12:55:25.838 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 12:55:25 compute-0 nova_compute[247443]: 2025-11-26 12:55:25.838 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 12:55:25 compute-0 nova_compute[247443]: 2025-11-26 12:55:25.838 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 12:55:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:55:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 12:55:26 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/71165918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 12:55:26 compute-0 nova_compute[247443]: 2025-11-26 12:55:26.166 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 12:55:26 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:26 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/71165918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 12:55:26 compute-0 nova_compute[247443]: 2025-11-26 12:55:26.372 247447 WARNING nova.virt.libvirt.driver [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 12:55:26 compute-0 nova_compute[247443]: 2025-11-26 12:55:26.373 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5222MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 12:55:26 compute-0 nova_compute[247443]: 2025-11-26 12:55:26.373 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 12:55:26 compute-0 nova_compute[247443]: 2025-11-26 12:55:26.373 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 12:55:26 compute-0 nova_compute[247443]: 2025-11-26 12:55:26.416 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 12:55:26 compute-0 nova_compute[247443]: 2025-11-26 12:55:26.416 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 12:55:26 compute-0 nova_compute[247443]: 2025-11-26 12:55:26.428 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 12:55:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 12:55:26 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/134117797' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 12:55:26 compute-0 nova_compute[247443]: 2025-11-26 12:55:26.759 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 12:55:26 compute-0 nova_compute[247443]: 2025-11-26 12:55:26.763 247447 DEBUG nova.compute.provider_tree [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Inventory has not changed in ProviderTree for provider: b5f91a62-c356-4895-a9c1-523d85f8751b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 12:55:26 compute-0 nova_compute[247443]: 2025-11-26 12:55:26.774 247447 DEBUG nova.scheduler.client.report [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Inventory has not changed for provider b5f91a62-c356-4895-a9c1-523d85f8751b based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 12:55:26 compute-0 nova_compute[247443]: 2025-11-26 12:55:26.775 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 12:55:26 compute-0 nova_compute[247443]: 2025-11-26 12:55:26.775 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 12:55:27 compute-0 ceph-mon[74966]: pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:27 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/134117797' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 12:55:27 compute-0 nova_compute[247443]: 2025-11-26 12:55:27.776 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:55:27 compute-0 nova_compute[247443]: 2025-11-26 12:55:27.818 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:55:28 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:28 compute-0 nova_compute[247443]: 2025-11-26 12:55:28.819 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:55:29 compute-0 ceph-mon[74966]: pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:30 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:31 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:55:31 compute-0 sudo[250886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:55:31 compute-0 sudo[250886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:55:31 compute-0 sudo[250886]: pam_unix(sudo:session): session closed for user root
Nov 26 12:55:31 compute-0 sudo[250911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:55:31 compute-0 sudo[250911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:55:31 compute-0 sudo[250911]: pam_unix(sudo:session): session closed for user root
Nov 26 12:55:31 compute-0 sudo[250936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:55:31 compute-0 sudo[250936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:55:31 compute-0 sudo[250936]: pam_unix(sudo:session): session closed for user root
Nov 26 12:55:31 compute-0 sudo[250961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 12:55:31 compute-0 sudo[250961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:55:31 compute-0 ceph-mon[74966]: pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:31 compute-0 sudo[250961]: pam_unix(sudo:session): session closed for user root
Nov 26 12:55:31 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:55:31 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:55:31 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 12:55:31 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:55:31 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 12:55:31 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:55:31 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev be587424-8db2-4519-aac1-36b0b7b29987 does not exist
Nov 26 12:55:31 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 0298f505-f118-4a8b-b7fe-717fc56dcd4a does not exist
Nov 26 12:55:31 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev fccfecd7-7114-47ae-bb85-65357551b3b5 does not exist
Nov 26 12:55:31 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 12:55:31 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:55:31 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 12:55:31 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:55:31 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:55:31 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:55:31 compute-0 sudo[251015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:55:31 compute-0 sudo[251015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:55:31 compute-0 sudo[251015]: pam_unix(sudo:session): session closed for user root
Nov 26 12:55:31 compute-0 sudo[251040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:55:31 compute-0 sudo[251040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:55:31 compute-0 sudo[251040]: pam_unix(sudo:session): session closed for user root
Nov 26 12:55:31 compute-0 sudo[251065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:55:31 compute-0 sudo[251065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:55:31 compute-0 sudo[251065]: pam_unix(sudo:session): session closed for user root
Nov 26 12:55:31 compute-0 sudo[251090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 12:55:31 compute-0 sudo[251090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:55:31 compute-0 podman[251146]: 2025-11-26 12:55:31.998928121 +0000 UTC m=+0.028570981 container create f1be956df823da680fefc5ce16a7183146e424b38022b9242f2cce60001edab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kare, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 12:55:32 compute-0 systemd[1]: Started libpod-conmon-f1be956df823da680fefc5ce16a7183146e424b38022b9242f2cce60001edab1.scope.
Nov 26 12:55:32 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:55:32 compute-0 podman[251146]: 2025-11-26 12:55:32.067183329 +0000 UTC m=+0.096826179 container init f1be956df823da680fefc5ce16a7183146e424b38022b9242f2cce60001edab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kare, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:55:32 compute-0 podman[251146]: 2025-11-26 12:55:32.073440742 +0000 UTC m=+0.103083592 container start f1be956df823da680fefc5ce16a7183146e424b38022b9242f2cce60001edab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kare, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 12:55:32 compute-0 podman[251146]: 2025-11-26 12:55:32.074492956 +0000 UTC m=+0.104135807 container attach f1be956df823da680fefc5ce16a7183146e424b38022b9242f2cce60001edab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:55:32 compute-0 infallible_kare[251159]: 167 167
Nov 26 12:55:32 compute-0 systemd[1]: libpod-f1be956df823da680fefc5ce16a7183146e424b38022b9242f2cce60001edab1.scope: Deactivated successfully.
Nov 26 12:55:32 compute-0 podman[251146]: 2025-11-26 12:55:32.07843736 +0000 UTC m=+0.108080210 container died f1be956df823da680fefc5ce16a7183146e424b38022b9242f2cce60001edab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 12:55:32 compute-0 podman[251146]: 2025-11-26 12:55:31.987990676 +0000 UTC m=+0.017633546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:55:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-28f8e0d5fdfeac7bfc30a5cb268fc34e54b85c2c2c0254b77f192f9649941046-merged.mount: Deactivated successfully.
Nov 26 12:55:32 compute-0 podman[251146]: 2025-11-26 12:55:32.09947958 +0000 UTC m=+0.129122430 container remove f1be956df823da680fefc5ce16a7183146e424b38022b9242f2cce60001edab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kare, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:55:32 compute-0 systemd[1]: libpod-conmon-f1be956df823da680fefc5ce16a7183146e424b38022b9242f2cce60001edab1.scope: Deactivated successfully.
Nov 26 12:55:32 compute-0 podman[251181]: 2025-11-26 12:55:32.220537 +0000 UTC m=+0.027562601 container create 5eea7f0beebf336fce8692f97e3050d192c8924e4c26bae68163aad487125665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jang, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 26 12:55:32 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:32 compute-0 systemd[1]: Started libpod-conmon-5eea7f0beebf336fce8692f97e3050d192c8924e4c26bae68163aad487125665.scope.
Nov 26 12:55:32 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:55:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2852556dcc9f5e16720b5dcaa771aa5692ea4ca429de137781ac08e3149e0a1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:55:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2852556dcc9f5e16720b5dcaa771aa5692ea4ca429de137781ac08e3149e0a1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:55:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2852556dcc9f5e16720b5dcaa771aa5692ea4ca429de137781ac08e3149e0a1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:55:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2852556dcc9f5e16720b5dcaa771aa5692ea4ca429de137781ac08e3149e0a1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:55:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2852556dcc9f5e16720b5dcaa771aa5692ea4ca429de137781ac08e3149e0a1e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:55:32 compute-0 podman[251181]: 2025-11-26 12:55:32.283420413 +0000 UTC m=+0.090446014 container init 5eea7f0beebf336fce8692f97e3050d192c8924e4c26bae68163aad487125665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 12:55:32 compute-0 podman[251181]: 2025-11-26 12:55:32.290094051 +0000 UTC m=+0.097119642 container start 5eea7f0beebf336fce8692f97e3050d192c8924e4c26bae68163aad487125665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 26 12:55:32 compute-0 podman[251181]: 2025-11-26 12:55:32.293136835 +0000 UTC m=+0.100162447 container attach 5eea7f0beebf336fce8692f97e3050d192c8924e4c26bae68163aad487125665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jang, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:55:32 compute-0 podman[251181]: 2025-11-26 12:55:32.210123303 +0000 UTC m=+0.017148904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:55:32 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:55:32 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:55:32 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:55:32 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:55:32 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:55:32 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:55:33 compute-0 exciting_jang[251194]: --> passed data devices: 0 physical, 3 LVM
Nov 26 12:55:33 compute-0 exciting_jang[251194]: --> relative data size: 1.0
Nov 26 12:55:33 compute-0 exciting_jang[251194]: --> All data devices are unavailable
Nov 26 12:55:33 compute-0 systemd[1]: libpod-5eea7f0beebf336fce8692f97e3050d192c8924e4c26bae68163aad487125665.scope: Deactivated successfully.
Nov 26 12:55:33 compute-0 podman[251181]: 2025-11-26 12:55:33.11177695 +0000 UTC m=+0.918802542 container died 5eea7f0beebf336fce8692f97e3050d192c8924e4c26bae68163aad487125665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 12:55:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-2852556dcc9f5e16720b5dcaa771aa5692ea4ca429de137781ac08e3149e0a1e-merged.mount: Deactivated successfully.
Nov 26 12:55:33 compute-0 podman[251181]: 2025-11-26 12:55:33.144448901 +0000 UTC m=+0.951474492 container remove 5eea7f0beebf336fce8692f97e3050d192c8924e4c26bae68163aad487125665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jang, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:55:33 compute-0 systemd[1]: libpod-conmon-5eea7f0beebf336fce8692f97e3050d192c8924e4c26bae68163aad487125665.scope: Deactivated successfully.
Nov 26 12:55:33 compute-0 sudo[251090]: pam_unix(sudo:session): session closed for user root
Nov 26 12:55:33 compute-0 sudo[251234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:55:33 compute-0 sudo[251234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:55:33 compute-0 sudo[251234]: pam_unix(sudo:session): session closed for user root
Nov 26 12:55:33 compute-0 sudo[251259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:55:33 compute-0 sudo[251259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:55:33 compute-0 sudo[251259]: pam_unix(sudo:session): session closed for user root
Nov 26 12:55:33 compute-0 sudo[251284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:55:33 compute-0 sudo[251284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:55:33 compute-0 sudo[251284]: pam_unix(sudo:session): session closed for user root
Nov 26 12:55:33 compute-0 sudo[251309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- lvm list --format json
Nov 26 12:55:33 compute-0 sudo[251309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:55:33 compute-0 ceph-mon[74966]: pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:33 compute-0 podman[251366]: 2025-11-26 12:55:33.592194032 +0000 UTC m=+0.027042050 container create a408adc1f030c23b82e7d80395ff9f2bb9eb1b3e45766efbdd278d861f5f614c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 12:55:33 compute-0 systemd[1]: Started libpod-conmon-a408adc1f030c23b82e7d80395ff9f2bb9eb1b3e45766efbdd278d861f5f614c.scope.
Nov 26 12:55:33 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:55:33 compute-0 podman[251366]: 2025-11-26 12:55:33.647826348 +0000 UTC m=+0.082674377 container init a408adc1f030c23b82e7d80395ff9f2bb9eb1b3e45766efbdd278d861f5f614c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_banach, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 12:55:33 compute-0 podman[251366]: 2025-11-26 12:55:33.652721884 +0000 UTC m=+0.087569904 container start a408adc1f030c23b82e7d80395ff9f2bb9eb1b3e45766efbdd278d861f5f614c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 12:55:33 compute-0 podman[251366]: 2025-11-26 12:55:33.653933609 +0000 UTC m=+0.088781628 container attach a408adc1f030c23b82e7d80395ff9f2bb9eb1b3e45766efbdd278d861f5f614c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:55:33 compute-0 relaxed_banach[251379]: 167 167
Nov 26 12:55:33 compute-0 systemd[1]: libpod-a408adc1f030c23b82e7d80395ff9f2bb9eb1b3e45766efbdd278d861f5f614c.scope: Deactivated successfully.
Nov 26 12:55:33 compute-0 podman[251366]: 2025-11-26 12:55:33.656378136 +0000 UTC m=+0.091226155 container died a408adc1f030c23b82e7d80395ff9f2bb9eb1b3e45766efbdd278d861f5f614c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_banach, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:55:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8afec17353f304ef5d30e597ce2ae1fe08f6a5928b00a8c3ad7aa7c3e2f829d-merged.mount: Deactivated successfully.
Nov 26 12:55:33 compute-0 podman[251366]: 2025-11-26 12:55:33.67255678 +0000 UTC m=+0.107404798 container remove a408adc1f030c23b82e7d80395ff9f2bb9eb1b3e45766efbdd278d861f5f614c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 12:55:33 compute-0 podman[251366]: 2025-11-26 12:55:33.581843042 +0000 UTC m=+0.016691081 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:55:33 compute-0 systemd[1]: libpod-conmon-a408adc1f030c23b82e7d80395ff9f2bb9eb1b3e45766efbdd278d861f5f614c.scope: Deactivated successfully.
Nov 26 12:55:33 compute-0 podman[251400]: 2025-11-26 12:55:33.794422902 +0000 UTC m=+0.028037105 container create 5021c4c563a9f324554f52355b7b92ff8f44f709f7e41fe55b46f343803e1a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:55:33 compute-0 systemd[1]: Started libpod-conmon-5021c4c563a9f324554f52355b7b92ff8f44f709f7e41fe55b46f343803e1a7b.scope.
Nov 26 12:55:33 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6bd37b185f43e69b5baa9b4476765608ff6a19e4bf198021d6dd21b91410b5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6bd37b185f43e69b5baa9b4476765608ff6a19e4bf198021d6dd21b91410b5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6bd37b185f43e69b5baa9b4476765608ff6a19e4bf198021d6dd21b91410b5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6bd37b185f43e69b5baa9b4476765608ff6a19e4bf198021d6dd21b91410b5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:55:33 compute-0 podman[251400]: 2025-11-26 12:55:33.85189684 +0000 UTC m=+0.085511042 container init 5021c4c563a9f324554f52355b7b92ff8f44f709f7e41fe55b46f343803e1a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:55:33 compute-0 podman[251400]: 2025-11-26 12:55:33.857824532 +0000 UTC m=+0.091438734 container start 5021c4c563a9f324554f52355b7b92ff8f44f709f7e41fe55b46f343803e1a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 12:55:33 compute-0 podman[251400]: 2025-11-26 12:55:33.858921119 +0000 UTC m=+0.092535321 container attach 5021c4c563a9f324554f52355b7b92ff8f44f709f7e41fe55b46f343803e1a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 12:55:33 compute-0 podman[251400]: 2025-11-26 12:55:33.783329093 +0000 UTC m=+0.016943315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:55:34 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]: {
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:     "0": [
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:         {
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "devices": [
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "/dev/loop3"
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             ],
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "lv_name": "ceph_lv0",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "lv_size": "21470642176",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "name": "ceph_lv0",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "tags": {
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.cluster_name": "ceph",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.crush_device_class": "",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.encrypted": "0",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.osd_id": "0",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.type": "block",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.vdo": "0"
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             },
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "type": "block",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "vg_name": "ceph_vg0"
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:         }
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:     ],
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:     "1": [
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:         {
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "devices": [
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "/dev/loop4"
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             ],
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "lv_name": "ceph_lv1",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "lv_size": "21470642176",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "name": "ceph_lv1",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "tags": {
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.cluster_name": "ceph",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.crush_device_class": "",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.encrypted": "0",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.osd_id": "1",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.type": "block",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.vdo": "0"
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             },
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "type": "block",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "vg_name": "ceph_vg1"
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:         }
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:     ],
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:     "2": [
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:         {
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "devices": [
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "/dev/loop5"
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             ],
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "lv_name": "ceph_lv2",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "lv_size": "21470642176",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "name": "ceph_lv2",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "tags": {
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.cluster_name": "ceph",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.crush_device_class": "",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.encrypted": "0",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.osd_id": "2",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.type": "block",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:                 "ceph.vdo": "0"
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             },
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "type": "block",
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:             "vg_name": "ceph_vg2"
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:         }
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]:     ]
Nov 26 12:55:34 compute-0 unruffled_bassi[251413]: }
Nov 26 12:55:34 compute-0 systemd[1]: libpod-5021c4c563a9f324554f52355b7b92ff8f44f709f7e41fe55b46f343803e1a7b.scope: Deactivated successfully.
Nov 26 12:55:34 compute-0 podman[251422]: 2025-11-26 12:55:34.525122667 +0000 UTC m=+0.017846678 container died 5021c4c563a9f324554f52355b7b92ff8f44f709f7e41fe55b46f343803e1a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:55:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6bd37b185f43e69b5baa9b4476765608ff6a19e4bf198021d6dd21b91410b5c-merged.mount: Deactivated successfully.
Nov 26 12:55:34 compute-0 podman[251422]: 2025-11-26 12:55:34.554449141 +0000 UTC m=+0.047173143 container remove 5021c4c563a9f324554f52355b7b92ff8f44f709f7e41fe55b46f343803e1a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 12:55:34 compute-0 systemd[1]: libpod-conmon-5021c4c563a9f324554f52355b7b92ff8f44f709f7e41fe55b46f343803e1a7b.scope: Deactivated successfully.
Nov 26 12:55:34 compute-0 sudo[251309]: pam_unix(sudo:session): session closed for user root
Nov 26 12:55:34 compute-0 sudo[251434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:55:34 compute-0 sudo[251434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:55:34 compute-0 sudo[251434]: pam_unix(sudo:session): session closed for user root
Nov 26 12:55:34 compute-0 sudo[251459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:55:34 compute-0 sudo[251459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:55:34 compute-0 sudo[251459]: pam_unix(sudo:session): session closed for user root
Nov 26 12:55:34 compute-0 sudo[251484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:55:34 compute-0 sudo[251484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:55:34 compute-0 sudo[251484]: pam_unix(sudo:session): session closed for user root
Nov 26 12:55:34 compute-0 sudo[251509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- raw list --format json
Nov 26 12:55:34 compute-0 sudo[251509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:55:34 compute-0 podman[251564]: 2025-11-26 12:55:34.964472474 +0000 UTC m=+0.025169972 container create 0c2944deb2be9d0631c4c1a575ec888997953ce0dee38912dd4fc83bc5c68aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 26 12:55:34 compute-0 systemd[1]: Started libpod-conmon-0c2944deb2be9d0631c4c1a575ec888997953ce0dee38912dd4fc83bc5c68aae.scope.
Nov 26 12:55:35 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:55:35 compute-0 podman[251564]: 2025-11-26 12:55:35.014870525 +0000 UTC m=+0.075568022 container init 0c2944deb2be9d0631c4c1a575ec888997953ce0dee38912dd4fc83bc5c68aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 12:55:35 compute-0 podman[251564]: 2025-11-26 12:55:35.01955854 +0000 UTC m=+0.080256028 container start 0c2944deb2be9d0631c4c1a575ec888997953ce0dee38912dd4fc83bc5c68aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 12:55:35 compute-0 podman[251564]: 2025-11-26 12:55:35.020681988 +0000 UTC m=+0.081379495 container attach 0c2944deb2be9d0631c4c1a575ec888997953ce0dee38912dd4fc83bc5c68aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 26 12:55:35 compute-0 frosty_liskov[251577]: 167 167
Nov 26 12:55:35 compute-0 systemd[1]: libpod-0c2944deb2be9d0631c4c1a575ec888997953ce0dee38912dd4fc83bc5c68aae.scope: Deactivated successfully.
Nov 26 12:55:35 compute-0 conmon[251577]: conmon 0c2944deb2be9d0631c4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0c2944deb2be9d0631c4c1a575ec888997953ce0dee38912dd4fc83bc5c68aae.scope/container/memory.events
Nov 26 12:55:35 compute-0 podman[251564]: 2025-11-26 12:55:35.023341971 +0000 UTC m=+0.084039459 container died 0c2944deb2be9d0631c4c1a575ec888997953ce0dee38912dd4fc83bc5c68aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_liskov, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 12:55:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-f55d9a376275e0167f042db54dd7b9e4fa272c9257163668c947a4cdcabfa210-merged.mount: Deactivated successfully.
Nov 26 12:55:35 compute-0 podman[251564]: 2025-11-26 12:55:35.046434376 +0000 UTC m=+0.107131864 container remove 0c2944deb2be9d0631c4c1a575ec888997953ce0dee38912dd4fc83bc5c68aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 12:55:35 compute-0 podman[251564]: 2025-11-26 12:55:34.954169635 +0000 UTC m=+0.014867123 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:55:35 compute-0 systemd[1]: libpod-conmon-0c2944deb2be9d0631c4c1a575ec888997953ce0dee38912dd4fc83bc5c68aae.scope: Deactivated successfully.
Nov 26 12:55:35 compute-0 podman[251598]: 2025-11-26 12:55:35.16220424 +0000 UTC m=+0.026294131 container create 329619a0178538c1dfc139b4499fdbc8ebade68e1f790575d12447d583079af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 12:55:35 compute-0 systemd[1]: Started libpod-conmon-329619a0178538c1dfc139b4499fdbc8ebade68e1f790575d12447d583079af7.scope.
Nov 26 12:55:35 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61537f810cdc2ace1b85a9dbffea889a1d622bc2518d683dbb1432daf7dd22d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61537f810cdc2ace1b85a9dbffea889a1d622bc2518d683dbb1432daf7dd22d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61537f810cdc2ace1b85a9dbffea889a1d622bc2518d683dbb1432daf7dd22d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61537f810cdc2ace1b85a9dbffea889a1d622bc2518d683dbb1432daf7dd22d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:55:35 compute-0 podman[251598]: 2025-11-26 12:55:35.218601337 +0000 UTC m=+0.082691248 container init 329619a0178538c1dfc139b4499fdbc8ebade68e1f790575d12447d583079af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 12:55:35 compute-0 podman[251598]: 2025-11-26 12:55:35.224601937 +0000 UTC m=+0.088691838 container start 329619a0178538c1dfc139b4499fdbc8ebade68e1f790575d12447d583079af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 12:55:35 compute-0 podman[251598]: 2025-11-26 12:55:35.225568328 +0000 UTC m=+0.089658219 container attach 329619a0178538c1dfc139b4499fdbc8ebade68e1f790575d12447d583079af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 12:55:35 compute-0 podman[251598]: 2025-11-26 12:55:35.152079476 +0000 UTC m=+0.016169368 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:55:35 compute-0 ceph-mon[74966]: pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:55:35
Nov 26 12:55:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 12:55:35 compute-0 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 12:55:35 compute-0 ceph-mgr[75236]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'vms', '.mgr', '.rgw.root', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'backups']
Nov 26 12:55:35 compute-0 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 12:55:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:55:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:55:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:55:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:55:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:55:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:55:35 compute-0 frosty_gauss[251611]: {
Nov 26 12:55:35 compute-0 frosty_gauss[251611]:     "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 12:55:35 compute-0 frosty_gauss[251611]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:55:35 compute-0 frosty_gauss[251611]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 12:55:35 compute-0 frosty_gauss[251611]:         "osd_id": 1,
Nov 26 12:55:35 compute-0 frosty_gauss[251611]:         "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:55:35 compute-0 frosty_gauss[251611]:         "type": "bluestore"
Nov 26 12:55:35 compute-0 frosty_gauss[251611]:     },
Nov 26 12:55:35 compute-0 frosty_gauss[251611]:     "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 12:55:35 compute-0 frosty_gauss[251611]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:55:35 compute-0 frosty_gauss[251611]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 12:55:35 compute-0 frosty_gauss[251611]:         "osd_id": 2,
Nov 26 12:55:35 compute-0 frosty_gauss[251611]:         "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:55:35 compute-0 frosty_gauss[251611]:         "type": "bluestore"
Nov 26 12:55:35 compute-0 frosty_gauss[251611]:     },
Nov 26 12:55:35 compute-0 frosty_gauss[251611]:     "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 12:55:35 compute-0 frosty_gauss[251611]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:55:35 compute-0 frosty_gauss[251611]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 12:55:35 compute-0 frosty_gauss[251611]:         "osd_id": 0,
Nov 26 12:55:35 compute-0 frosty_gauss[251611]:         "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:55:35 compute-0 frosty_gauss[251611]:         "type": "bluestore"
Nov 26 12:55:35 compute-0 frosty_gauss[251611]:     }
Nov 26 12:55:35 compute-0 frosty_gauss[251611]: }
Nov 26 12:55:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 12:55:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:55:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 12:55:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:55:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:55:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:55:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:55:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:55:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:55:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:55:35 compute-0 systemd[1]: libpod-329619a0178538c1dfc139b4499fdbc8ebade68e1f790575d12447d583079af7.scope: Deactivated successfully.
Nov 26 12:55:35 compute-0 podman[251598]: 2025-11-26 12:55:35.99144866 +0000 UTC m=+0.855538552 container died 329619a0178538c1dfc139b4499fdbc8ebade68e1f790575d12447d583079af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:55:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-61537f810cdc2ace1b85a9dbffea889a1d622bc2518d683dbb1432daf7dd22d2-merged.mount: Deactivated successfully.
Nov 26 12:55:36 compute-0 podman[251598]: 2025-11-26 12:55:36.023095419 +0000 UTC m=+0.887185310 container remove 329619a0178538c1dfc139b4499fdbc8ebade68e1f790575d12447d583079af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 12:55:36 compute-0 systemd[1]: libpod-conmon-329619a0178538c1dfc139b4499fdbc8ebade68e1f790575d12447d583079af7.scope: Deactivated successfully.
Nov 26 12:55:36 compute-0 sudo[251509]: pam_unix(sudo:session): session closed for user root
Nov 26 12:55:36 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:55:36 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:55:36 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:55:36 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:55:36 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 0f6d0b5d-345f-4c94-8a3c-b56a06abf1e4 does not exist
Nov 26 12:55:36 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 4ef72ba1-cbbd-4051-b5d8-759c1afda08b does not exist
Nov 26 12:55:36 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:55:36 compute-0 sudo[251655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:55:36 compute-0 sudo[251655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:55:36 compute-0 sudo[251655]: pam_unix(sudo:session): session closed for user root
Nov 26 12:55:36 compute-0 sudo[251680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 12:55:36 compute-0 sudo[251680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:55:36 compute-0 sudo[251680]: pam_unix(sudo:session): session closed for user root
Nov 26 12:55:36 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:37 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:55:37 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:55:37 compute-0 ceph-mon[74966]: pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:38 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:39 compute-0 ceph-mon[74966]: pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:40 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:55:41 compute-0 ceph-mon[74966]: pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:42 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:43 compute-0 ceph-mon[74966]: pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:44 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 12:55:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:55:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 12:55:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:55:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:55:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:55:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:55:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:55:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:55:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:55:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:55:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:55:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 12:55:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:55:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:55:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:55:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 12:55:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:55:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 12:55:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:55:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:55:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:55:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 12:55:45 compute-0 ceph-mon[74966]: pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:55:46 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:47 compute-0 ceph-mon[74966]: pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:47 compute-0 podman[251706]: 2025-11-26 12:55:47.882876396 +0000 UTC m=+0.047373110 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 12:55:47 compute-0 podman[251705]: 2025-11-26 12:55:47.910348365 +0000 UTC m=+0.074893661 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 26 12:55:48 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:49 compute-0 ceph-mon[74966]: pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:50 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:51 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:55:51 compute-0 ceph-mon[74966]: pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:52 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:52 compute-0 podman[251739]: 2025-11-26 12:55:52.889554451 +0000 UTC m=+0.056444968 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 26 12:55:53 compute-0 ceph-mon[74966]: pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:54 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:55 compute-0 ceph-mon[74966]: pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:55:56 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:57 compute-0 ceph-mon[74966]: pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:58 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:55:59 compute-0 ceph-mon[74966]: pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:00 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:01 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:56:01 compute-0 ceph-mon[74966]: pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:56:01.729 159053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 12:56:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:56:01.730 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 12:56:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:56:01.730 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 12:56:02 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:03 compute-0 ceph-mon[74966]: pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:04 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:05 compute-0 ceph-mon[74966]: pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:56:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:56:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:56:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:56:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:56:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:56:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:56:06 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:07 compute-0 ceph-mon[74966]: pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:08 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:09 compute-0 ceph-mon[74966]: pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:10 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:11 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:56:11 compute-0 ceph-mon[74966]: pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:12 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:13 compute-0 ceph-mon[74966]: pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:14 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:15 compute-0 ceph-mon[74966]: pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:16 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:56:16 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:17 compute-0 ceph-mon[74966]: pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:18 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 12:56:18 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3968811267' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 12:56:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 12:56:18 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3968811267' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 12:56:18 compute-0 podman[251762]: 2025-11-26 12:56:18.880947045 +0000 UTC m=+0.039971790 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 12:56:18 compute-0 podman[251763]: 2025-11-26 12:56:18.88429721 +0000 UTC m=+0.041421974 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 26 12:56:19 compute-0 ceph-mon[74966]: pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:19 compute-0 ceph-mon[74966]: from='client.? 192.168.122.10:0/3968811267' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 12:56:19 compute-0 ceph-mon[74966]: from='client.? 192.168.122.10:0/3968811267' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 12:56:20 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:21 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:56:21 compute-0 ceph-mon[74966]: pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:22 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:22 compute-0 nova_compute[247443]: 2025-11-26 12:56:22.820 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:56:22 compute-0 nova_compute[247443]: 2025-11-26 12:56:22.820 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 26 12:56:22 compute-0 nova_compute[247443]: 2025-11-26 12:56:22.834 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 26 12:56:22 compute-0 nova_compute[247443]: 2025-11-26 12:56:22.834 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:56:22 compute-0 nova_compute[247443]: 2025-11-26 12:56:22.834 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 26 12:56:22 compute-0 nova_compute[247443]: 2025-11-26 12:56:22.842 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:56:23 compute-0 ceph-mon[74966]: pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:23 compute-0 podman[251797]: 2025-11-26 12:56:23.893593361 +0000 UTC m=+0.057706025 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 26 12:56:24 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:24 compute-0 nova_compute[247443]: 2025-11-26 12:56:24.847 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:56:24 compute-0 nova_compute[247443]: 2025-11-26 12:56:24.848 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 12:56:24 compute-0 nova_compute[247443]: 2025-11-26 12:56:24.848 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 12:56:24 compute-0 nova_compute[247443]: 2025-11-26 12:56:24.859 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 12:56:24 compute-0 nova_compute[247443]: 2025-11-26 12:56:24.859 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:56:25 compute-0 ceph-mon[74966]: pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:56:26 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:26 compute-0 nova_compute[247443]: 2025-11-26 12:56:26.818 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:56:26 compute-0 nova_compute[247443]: 2025-11-26 12:56:26.819 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:56:26 compute-0 nova_compute[247443]: 2025-11-26 12:56:26.819 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 12:56:26 compute-0 nova_compute[247443]: 2025-11-26 12:56:26.819 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:56:26 compute-0 nova_compute[247443]: 2025-11-26 12:56:26.839 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 12:56:26 compute-0 nova_compute[247443]: 2025-11-26 12:56:26.840 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 12:56:26 compute-0 nova_compute[247443]: 2025-11-26 12:56:26.840 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 12:56:26 compute-0 nova_compute[247443]: 2025-11-26 12:56:26.840 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 12:56:26 compute-0 nova_compute[247443]: 2025-11-26 12:56:26.840 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 12:56:27 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 12:56:27 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2536353661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 12:56:27 compute-0 nova_compute[247443]: 2025-11-26 12:56:27.168 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 12:56:27 compute-0 nova_compute[247443]: 2025-11-26 12:56:27.359 247447 WARNING nova.virt.libvirt.driver [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 12:56:27 compute-0 nova_compute[247443]: 2025-11-26 12:56:27.361 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5228MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 12:56:27 compute-0 nova_compute[247443]: 2025-11-26 12:56:27.361 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 12:56:27 compute-0 nova_compute[247443]: 2025-11-26 12:56:27.361 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 12:56:27 compute-0 ceph-mon[74966]: pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:27 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2536353661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 12:56:27 compute-0 nova_compute[247443]: 2025-11-26 12:56:27.523 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 12:56:27 compute-0 nova_compute[247443]: 2025-11-26 12:56:27.523 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 12:56:27 compute-0 nova_compute[247443]: 2025-11-26 12:56:27.597 247447 DEBUG nova.scheduler.client.report [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Refreshing inventories for resource provider b5f91a62-c356-4895-a9c1-523d85f8751b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 26 12:56:27 compute-0 nova_compute[247443]: 2025-11-26 12:56:27.664 247447 DEBUG nova.scheduler.client.report [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Updating ProviderTree inventory for provider b5f91a62-c356-4895-a9c1-523d85f8751b from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 26 12:56:27 compute-0 nova_compute[247443]: 2025-11-26 12:56:27.665 247447 DEBUG nova.compute.provider_tree [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Updating inventory in ProviderTree for provider b5f91a62-c356-4895-a9c1-523d85f8751b with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 26 12:56:27 compute-0 nova_compute[247443]: 2025-11-26 12:56:27.678 247447 DEBUG nova.scheduler.client.report [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Refreshing aggregate associations for resource provider b5f91a62-c356-4895-a9c1-523d85f8751b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 26 12:56:27 compute-0 nova_compute[247443]: 2025-11-26 12:56:27.696 247447 DEBUG nova.scheduler.client.report [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Refreshing trait associations for resource provider b5f91a62-c356-4895-a9c1-523d85f8751b, traits: HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX512VAES,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI2,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AVX,HW_CPU_X86_SSE4A,HW_CPU_X86_BMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE41,HW_CPU_X86_F16C,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_2_0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 26 12:56:27 compute-0 nova_compute[247443]: 2025-11-26 12:56:27.708 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 12:56:28 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 12:56:28 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/252917307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 12:56:28 compute-0 nova_compute[247443]: 2025-11-26 12:56:28.036 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 12:56:28 compute-0 nova_compute[247443]: 2025-11-26 12:56:28.040 247447 DEBUG nova.compute.provider_tree [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Inventory has not changed in ProviderTree for provider: b5f91a62-c356-4895-a9c1-523d85f8751b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 12:56:28 compute-0 nova_compute[247443]: 2025-11-26 12:56:28.052 247447 DEBUG nova.scheduler.client.report [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Inventory has not changed for provider b5f91a62-c356-4895-a9c1-523d85f8751b based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 12:56:28 compute-0 nova_compute[247443]: 2025-11-26 12:56:28.053 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 12:56:28 compute-0 nova_compute[247443]: 2025-11-26 12:56:28.053 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 12:56:28 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:28 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/252917307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 12:56:29 compute-0 nova_compute[247443]: 2025-11-26 12:56:29.050 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:56:29 compute-0 nova_compute[247443]: 2025-11-26 12:56:29.051 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:56:29 compute-0 nova_compute[247443]: 2025-11-26 12:56:29.051 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:56:29 compute-0 ceph-mon[74966]: pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:29 compute-0 nova_compute[247443]: 2025-11-26 12:56:29.819 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:56:30 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:31 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:56:31 compute-0 ceph-mon[74966]: pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:32 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:33 compute-0 ceph-mon[74966]: pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:34 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:35 compute-0 ceph-mon[74966]: pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:56:35
Nov 26 12:56:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 12:56:35 compute-0 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 12:56:35 compute-0 ceph-mgr[75236]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'backups', '.rgw.root', 'vms']
Nov 26 12:56:35 compute-0 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 12:56:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:56:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:56:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:56:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:56:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:56:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:56:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 12:56:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 12:56:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:56:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:56:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:56:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:56:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:56:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:56:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:56:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:56:36 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.077368) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161796077401, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1343, "num_deletes": 506, "total_data_size": 1616483, "memory_usage": 1646048, "flush_reason": "Manual Compaction"}
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161796082554, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1600915, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13494, "largest_seqno": 14836, "table_properties": {"data_size": 1594983, "index_size": 2752, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 14869, "raw_average_key_size": 18, "raw_value_size": 1581196, "raw_average_value_size": 1916, "num_data_blocks": 126, "num_entries": 825, "num_filter_entries": 825, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764161687, "oldest_key_time": 1764161687, "file_creation_time": 1764161796, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "360f285c-8dc8-4f98-b8a2-efdebada3f64", "db_session_id": "S468WH7D6IL73VDKE1V5", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 5204 microseconds, and 3925 cpu microseconds.
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.082579) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1600915 bytes OK
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.082592) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.082959) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.082981) EVENT_LOG_v1 {"time_micros": 1764161796082966, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.082990) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1609408, prev total WAL file size 1609408, number of live WAL files 2.
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.083381) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1563KB)], [32(7390KB)]
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161796083409, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9168939, "oldest_snapshot_seqno": -1}
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3803 keys, 7140974 bytes, temperature: kUnknown
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161796098700, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7140974, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7113873, "index_size": 16495, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9541, "raw_key_size": 93229, "raw_average_key_size": 24, "raw_value_size": 7043351, "raw_average_value_size": 1852, "num_data_blocks": 699, "num_entries": 3803, "num_filter_entries": 3803, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764160613, "oldest_key_time": 0, "file_creation_time": 1764161796, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "360f285c-8dc8-4f98-b8a2-efdebada3f64", "db_session_id": "S468WH7D6IL73VDKE1V5", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.098861) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7140974 bytes
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.099184) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 598.2 rd, 465.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.2 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(10.2) write-amplify(4.5) OK, records in: 4828, records dropped: 1025 output_compression: NoCompression
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.099197) EVENT_LOG_v1 {"time_micros": 1764161796099191, "job": 14, "event": "compaction_finished", "compaction_time_micros": 15327, "compaction_time_cpu_micros": 13012, "output_level": 6, "num_output_files": 1, "total_output_size": 7140974, "num_input_records": 4828, "num_output_records": 3803, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161796099465, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764161796100555, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.083325) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.100588) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.100590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.100591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.100592) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:56:36 compute-0 ceph-mon[74966]: rocksdb: (Original Log Time 2025/11/26-12:56:36.100593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 12:56:36 compute-0 sudo[251864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:56:36 compute-0 sudo[251864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:56:36 compute-0 sudo[251864]: pam_unix(sudo:session): session closed for user root
Nov 26 12:56:36 compute-0 sudo[251889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:56:36 compute-0 sudo[251889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:56:36 compute-0 sudo[251889]: pam_unix(sudo:session): session closed for user root
Nov 26 12:56:36 compute-0 sudo[251914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:56:36 compute-0 sudo[251914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:56:36 compute-0 sudo[251914]: pam_unix(sudo:session): session closed for user root
Nov 26 12:56:36 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:36 compute-0 sudo[251939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 12:56:36 compute-0 sudo[251939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:56:36 compute-0 sudo[251939]: pam_unix(sudo:session): session closed for user root
Nov 26 12:56:36 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:56:36 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:56:36 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 12:56:36 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:56:36 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 12:56:36 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:56:36 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev d8fe2dca-14a2-4ad6-9ea1-0e4f095e72e7 does not exist
Nov 26 12:56:36 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev b37a9705-c183-40d4-96ee-382034d27a03 does not exist
Nov 26 12:56:36 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 43071f8e-35eb-440b-b83a-770b1d3f8648 does not exist
Nov 26 12:56:36 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 12:56:36 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:56:36 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 12:56:36 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:56:36 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:56:36 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:56:36 compute-0 sudo[251993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:56:36 compute-0 sudo[251993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:56:36 compute-0 sudo[251993]: pam_unix(sudo:session): session closed for user root
Nov 26 12:56:36 compute-0 sudo[252018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:56:36 compute-0 sudo[252018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:56:36 compute-0 sudo[252018]: pam_unix(sudo:session): session closed for user root
Nov 26 12:56:36 compute-0 sudo[252043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:56:36 compute-0 sudo[252043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:56:36 compute-0 sudo[252043]: pam_unix(sudo:session): session closed for user root
Nov 26 12:56:36 compute-0 sudo[252068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 12:56:36 compute-0 sudo[252068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:56:37 compute-0 podman[252123]: 2025-11-26 12:56:37.025699434 +0000 UTC m=+0.026178057 container create 6ab94b264c0568c0dbe5fffb9799c812b6ec5f190d29916168d6764753f03bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatterjee, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:56:37 compute-0 systemd[1]: Started libpod-conmon-6ab94b264c0568c0dbe5fffb9799c812b6ec5f190d29916168d6764753f03bc9.scope.
Nov 26 12:56:37 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:56:37 compute-0 podman[252123]: 2025-11-26 12:56:37.075285434 +0000 UTC m=+0.075764067 container init 6ab94b264c0568c0dbe5fffb9799c812b6ec5f190d29916168d6764753f03bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:56:37 compute-0 ceph-mon[74966]: pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:37 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:56:37 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:56:37 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:56:37 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:56:37 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:56:37 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:56:37 compute-0 podman[252123]: 2025-11-26 12:56:37.080277624 +0000 UTC m=+0.080756247 container start 6ab94b264c0568c0dbe5fffb9799c812b6ec5f190d29916168d6764753f03bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatterjee, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:56:37 compute-0 podman[252123]: 2025-11-26 12:56:37.081446237 +0000 UTC m=+0.081924860 container attach 6ab94b264c0568c0dbe5fffb9799c812b6ec5f190d29916168d6764753f03bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:56:37 compute-0 magical_chatterjee[252136]: 167 167
Nov 26 12:56:37 compute-0 systemd[1]: libpod-6ab94b264c0568c0dbe5fffb9799c812b6ec5f190d29916168d6764753f03bc9.scope: Deactivated successfully.
Nov 26 12:56:37 compute-0 podman[252123]: 2025-11-26 12:56:37.084667259 +0000 UTC m=+0.085145882 container died 6ab94b264c0568c0dbe5fffb9799c812b6ec5f190d29916168d6764753f03bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatterjee, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:56:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-65a8bb8e451cf84e9306be10d45ea1f4dc69d38d780be9d84cd75bb8644b31aa-merged.mount: Deactivated successfully.
Nov 26 12:56:37 compute-0 podman[252123]: 2025-11-26 12:56:37.106967375 +0000 UTC m=+0.107445998 container remove 6ab94b264c0568c0dbe5fffb9799c812b6ec5f190d29916168d6764753f03bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatterjee, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:56:37 compute-0 podman[252123]: 2025-11-26 12:56:37.015249696 +0000 UTC m=+0.015728319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:56:37 compute-0 systemd[1]: libpod-conmon-6ab94b264c0568c0dbe5fffb9799c812b6ec5f190d29916168d6764753f03bc9.scope: Deactivated successfully.
Nov 26 12:56:37 compute-0 podman[252158]: 2025-11-26 12:56:37.22499678 +0000 UTC m=+0.025812198 container create b01e500e4f11a6059ffb3cfdc6a893dc89784a959635409a874d92eeef60fecb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_turing, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:56:37 compute-0 systemd[1]: Started libpod-conmon-b01e500e4f11a6059ffb3cfdc6a893dc89784a959635409a874d92eeef60fecb.scope.
Nov 26 12:56:37 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:56:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66be5db121bcd0f7410c3987b71e7165a8243e825d4138bdfc5190c98c6e35e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:56:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66be5db121bcd0f7410c3987b71e7165a8243e825d4138bdfc5190c98c6e35e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:56:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66be5db121bcd0f7410c3987b71e7165a8243e825d4138bdfc5190c98c6e35e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:56:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66be5db121bcd0f7410c3987b71e7165a8243e825d4138bdfc5190c98c6e35e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:56:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66be5db121bcd0f7410c3987b71e7165a8243e825d4138bdfc5190c98c6e35e8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:56:37 compute-0 podman[252158]: 2025-11-26 12:56:37.28794155 +0000 UTC m=+0.088756988 container init b01e500e4f11a6059ffb3cfdc6a893dc89784a959635409a874d92eeef60fecb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_turing, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:56:37 compute-0 podman[252158]: 2025-11-26 12:56:37.294159983 +0000 UTC m=+0.094975400 container start b01e500e4f11a6059ffb3cfdc6a893dc89784a959635409a874d92eeef60fecb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:56:37 compute-0 podman[252158]: 2025-11-26 12:56:37.295228425 +0000 UTC m=+0.096043844 container attach b01e500e4f11a6059ffb3cfdc6a893dc89784a959635409a874d92eeef60fecb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_turing, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:56:37 compute-0 podman[252158]: 2025-11-26 12:56:37.214709688 +0000 UTC m=+0.015525127 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:56:38 compute-0 charming_turing[252171]: --> passed data devices: 0 physical, 3 LVM
Nov 26 12:56:38 compute-0 charming_turing[252171]: --> relative data size: 1.0
Nov 26 12:56:38 compute-0 charming_turing[252171]: --> All data devices are unavailable
Nov 26 12:56:38 compute-0 systemd[1]: libpod-b01e500e4f11a6059ffb3cfdc6a893dc89784a959635409a874d92eeef60fecb.scope: Deactivated successfully.
Nov 26 12:56:38 compute-0 podman[252158]: 2025-11-26 12:56:38.123409786 +0000 UTC m=+0.924225225 container died b01e500e4f11a6059ffb3cfdc6a893dc89784a959635409a874d92eeef60fecb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 12:56:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-66be5db121bcd0f7410c3987b71e7165a8243e825d4138bdfc5190c98c6e35e8-merged.mount: Deactivated successfully.
Nov 26 12:56:38 compute-0 podman[252158]: 2025-11-26 12:56:38.157006336 +0000 UTC m=+0.957821754 container remove b01e500e4f11a6059ffb3cfdc6a893dc89784a959635409a874d92eeef60fecb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_turing, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 12:56:38 compute-0 systemd[1]: libpod-conmon-b01e500e4f11a6059ffb3cfdc6a893dc89784a959635409a874d92eeef60fecb.scope: Deactivated successfully.
Nov 26 12:56:38 compute-0 sudo[252068]: pam_unix(sudo:session): session closed for user root
Nov 26 12:56:38 compute-0 sudo[252210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:56:38 compute-0 sudo[252210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:56:38 compute-0 sudo[252210]: pam_unix(sudo:session): session closed for user root
Nov 26 12:56:38 compute-0 sudo[252235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:56:38 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:38 compute-0 sudo[252235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:56:38 compute-0 sudo[252235]: pam_unix(sudo:session): session closed for user root
Nov 26 12:56:38 compute-0 sudo[252260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:56:38 compute-0 sudo[252260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:56:38 compute-0 sudo[252260]: pam_unix(sudo:session): session closed for user root
Nov 26 12:56:38 compute-0 sudo[252285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- lvm list --format json
Nov 26 12:56:38 compute-0 sudo[252285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:56:38 compute-0 podman[252340]: 2025-11-26 12:56:38.614624014 +0000 UTC m=+0.027708030 container create 209ded90391c24872f6d158447bde3417095ac458ce2080479e8aa922c537121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mccarthy, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 12:56:38 compute-0 systemd[1]: Started libpod-conmon-209ded90391c24872f6d158447bde3417095ac458ce2080479e8aa922c537121.scope.
Nov 26 12:56:38 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:56:38 compute-0 podman[252340]: 2025-11-26 12:56:38.675798819 +0000 UTC m=+0.088882855 container init 209ded90391c24872f6d158447bde3417095ac458ce2080479e8aa922c537121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:56:38 compute-0 podman[252340]: 2025-11-26 12:56:38.680914502 +0000 UTC m=+0.093998519 container start 209ded90391c24872f6d158447bde3417095ac458ce2080479e8aa922c537121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 12:56:38 compute-0 podman[252340]: 2025-11-26 12:56:38.682120606 +0000 UTC m=+0.095204622 container attach 209ded90391c24872f6d158447bde3417095ac458ce2080479e8aa922c537121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 12:56:38 compute-0 upbeat_mccarthy[252353]: 167 167
Nov 26 12:56:38 compute-0 systemd[1]: libpod-209ded90391c24872f6d158447bde3417095ac458ce2080479e8aa922c537121.scope: Deactivated successfully.
Nov 26 12:56:38 compute-0 conmon[252353]: conmon 209ded90391c24872f6d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-209ded90391c24872f6d158447bde3417095ac458ce2080479e8aa922c537121.scope/container/memory.events
Nov 26 12:56:38 compute-0 podman[252340]: 2025-11-26 12:56:38.685259823 +0000 UTC m=+0.098343839 container died 209ded90391c24872f6d158447bde3417095ac458ce2080479e8aa922c537121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mccarthy, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 12:56:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f2e1635d95e2542c9e9edcd4d3ff377d4d4a79948c5e4b97fc75d174f3288a0-merged.mount: Deactivated successfully.
Nov 26 12:56:38 compute-0 podman[252340]: 2025-11-26 12:56:38.602738631 +0000 UTC m=+0.015822657 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:56:38 compute-0 podman[252340]: 2025-11-26 12:56:38.702203562 +0000 UTC m=+0.115287578 container remove 209ded90391c24872f6d158447bde3417095ac458ce2080479e8aa922c537121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:56:38 compute-0 systemd[1]: libpod-conmon-209ded90391c24872f6d158447bde3417095ac458ce2080479e8aa922c537121.scope: Deactivated successfully.
Nov 26 12:56:38 compute-0 podman[252374]: 2025-11-26 12:56:38.82862156 +0000 UTC m=+0.029458450 container create 25805e648c9ec7c1e8806c50eeb9d54b26fdb7d241b87ecc1c650c35a9d479dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wozniak, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 12:56:38 compute-0 systemd[1]: Started libpod-conmon-25805e648c9ec7c1e8806c50eeb9d54b26fdb7d241b87ecc1c650c35a9d479dd.scope.
Nov 26 12:56:38 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:56:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c440c58c9481e72350c1cb56a27b2ab4952f072b960dbff2c3cdc3150f5bdae4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:56:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c440c58c9481e72350c1cb56a27b2ab4952f072b960dbff2c3cdc3150f5bdae4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:56:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c440c58c9481e72350c1cb56a27b2ab4952f072b960dbff2c3cdc3150f5bdae4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:56:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c440c58c9481e72350c1cb56a27b2ab4952f072b960dbff2c3cdc3150f5bdae4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:56:38 compute-0 podman[252374]: 2025-11-26 12:56:38.890253826 +0000 UTC m=+0.091090717 container init 25805e648c9ec7c1e8806c50eeb9d54b26fdb7d241b87ecc1c650c35a9d479dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:56:38 compute-0 podman[252374]: 2025-11-26 12:56:38.895289819 +0000 UTC m=+0.096126710 container start 25805e648c9ec7c1e8806c50eeb9d54b26fdb7d241b87ecc1c650c35a9d479dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:56:38 compute-0 podman[252374]: 2025-11-26 12:56:38.896533254 +0000 UTC m=+0.097370144 container attach 25805e648c9ec7c1e8806c50eeb9d54b26fdb7d241b87ecc1c650c35a9d479dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wozniak, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 12:56:38 compute-0 podman[252374]: 2025-11-26 12:56:38.817119047 +0000 UTC m=+0.017955939 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:56:39 compute-0 ceph-mon[74966]: pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:39 compute-0 brave_wozniak[252387]: {
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:     "0": [
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:         {
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "devices": [
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "/dev/loop3"
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             ],
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "lv_name": "ceph_lv0",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "lv_size": "21470642176",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "name": "ceph_lv0",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "tags": {
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.cluster_name": "ceph",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.crush_device_class": "",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.encrypted": "0",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.osd_id": "0",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.type": "block",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.vdo": "0"
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             },
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "type": "block",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "vg_name": "ceph_vg0"
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:         }
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:     ],
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:     "1": [
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:         {
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "devices": [
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "/dev/loop4"
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             ],
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "lv_name": "ceph_lv1",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "lv_size": "21470642176",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "name": "ceph_lv1",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "tags": {
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.cluster_name": "ceph",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.crush_device_class": "",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.encrypted": "0",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.osd_id": "1",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.type": "block",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.vdo": "0"
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             },
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "type": "block",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "vg_name": "ceph_vg1"
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:         }
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:     ],
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:     "2": [
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:         {
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "devices": [
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "/dev/loop5"
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             ],
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "lv_name": "ceph_lv2",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "lv_size": "21470642176",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "name": "ceph_lv2",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "tags": {
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.cluster_name": "ceph",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.crush_device_class": "",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.encrypted": "0",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.osd_id": "2",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.type": "block",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:                 "ceph.vdo": "0"
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             },
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "type": "block",
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:             "vg_name": "ceph_vg2"
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:         }
Nov 26 12:56:39 compute-0 brave_wozniak[252387]:     ]
Nov 26 12:56:39 compute-0 brave_wozniak[252387]: }
Nov 26 12:56:39 compute-0 podman[252374]: 2025-11-26 12:56:39.539181274 +0000 UTC m=+0.740018165 container died 25805e648c9ec7c1e8806c50eeb9d54b26fdb7d241b87ecc1c650c35a9d479dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 12:56:39 compute-0 systemd[1]: libpod-25805e648c9ec7c1e8806c50eeb9d54b26fdb7d241b87ecc1c650c35a9d479dd.scope: Deactivated successfully.
Nov 26 12:56:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-c440c58c9481e72350c1cb56a27b2ab4952f072b960dbff2c3cdc3150f5bdae4-merged.mount: Deactivated successfully.
Nov 26 12:56:39 compute-0 podman[252374]: 2025-11-26 12:56:39.567801983 +0000 UTC m=+0.768638874 container remove 25805e648c9ec7c1e8806c50eeb9d54b26fdb7d241b87ecc1c650c35a9d479dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 12:56:39 compute-0 systemd[1]: libpod-conmon-25805e648c9ec7c1e8806c50eeb9d54b26fdb7d241b87ecc1c650c35a9d479dd.scope: Deactivated successfully.
Nov 26 12:56:39 compute-0 sudo[252285]: pam_unix(sudo:session): session closed for user root
Nov 26 12:56:39 compute-0 sudo[252405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:56:39 compute-0 sudo[252405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:56:39 compute-0 sudo[252405]: pam_unix(sudo:session): session closed for user root
Nov 26 12:56:39 compute-0 sudo[252430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:56:39 compute-0 sudo[252430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:56:39 compute-0 sudo[252430]: pam_unix(sudo:session): session closed for user root
Nov 26 12:56:39 compute-0 sudo[252455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:56:39 compute-0 sudo[252455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:56:39 compute-0 sudo[252455]: pam_unix(sudo:session): session closed for user root
Nov 26 12:56:39 compute-0 sudo[252480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- raw list --format json
Nov 26 12:56:39 compute-0 sudo[252480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:56:40 compute-0 podman[252535]: 2025-11-26 12:56:40.014774125 +0000 UTC m=+0.029935099 container create 95fdc8888cb695c0fbcaa4913422ee4ea81764c8e9e50a53aaf62d036dafc8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:56:40 compute-0 systemd[1]: Started libpod-conmon-95fdc8888cb695c0fbcaa4913422ee4ea81764c8e9e50a53aaf62d036dafc8c2.scope.
Nov 26 12:56:40 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:56:40 compute-0 podman[252535]: 2025-11-26 12:56:40.070590449 +0000 UTC m=+0.085751434 container init 95fdc8888cb695c0fbcaa4913422ee4ea81764c8e9e50a53aaf62d036dafc8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:56:40 compute-0 podman[252535]: 2025-11-26 12:56:40.075827812 +0000 UTC m=+0.090988806 container start 95fdc8888cb695c0fbcaa4913422ee4ea81764c8e9e50a53aaf62d036dafc8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:56:40 compute-0 podman[252535]: 2025-11-26 12:56:40.076910031 +0000 UTC m=+0.092071016 container attach 95fdc8888cb695c0fbcaa4913422ee4ea81764c8e9e50a53aaf62d036dafc8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:56:40 compute-0 wizardly_jackson[252548]: 167 167
Nov 26 12:56:40 compute-0 systemd[1]: libpod-95fdc8888cb695c0fbcaa4913422ee4ea81764c8e9e50a53aaf62d036dafc8c2.scope: Deactivated successfully.
Nov 26 12:56:40 compute-0 podman[252535]: 2025-11-26 12:56:40.078542378 +0000 UTC m=+0.093703362 container died 95fdc8888cb695c0fbcaa4913422ee4ea81764c8e9e50a53aaf62d036dafc8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 12:56:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-9731aad20ea468440349a9f869edf36a30c84c9beae948fa6e5b03ab385f3427-merged.mount: Deactivated successfully.
Nov 26 12:56:40 compute-0 podman[252535]: 2025-11-26 12:56:40.09795958 +0000 UTC m=+0.113120564 container remove 95fdc8888cb695c0fbcaa4913422ee4ea81764c8e9e50a53aaf62d036dafc8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 12:56:40 compute-0 podman[252535]: 2025-11-26 12:56:40.001985719 +0000 UTC m=+0.017146703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:56:40 compute-0 systemd[1]: libpod-conmon-95fdc8888cb695c0fbcaa4913422ee4ea81764c8e9e50a53aaf62d036dafc8c2.scope: Deactivated successfully.
Nov 26 12:56:40 compute-0 podman[252570]: 2025-11-26 12:56:40.219025097 +0000 UTC m=+0.028175672 container create 0cf2bc76b94a92ecbc5c8bf9572dbb36e928f252a5cc1c18b10e9100be01e36e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_sutherland, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 12:56:40 compute-0 systemd[1]: Started libpod-conmon-0cf2bc76b94a92ecbc5c8bf9572dbb36e928f252a5cc1c18b10e9100be01e36e.scope.
Nov 26 12:56:40 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:56:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cee4566f32c048a3d574aab74cccbb29442d04bb31e393c99912ac2cc2332c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:56:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cee4566f32c048a3d574aab74cccbb29442d04bb31e393c99912ac2cc2332c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:56:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cee4566f32c048a3d574aab74cccbb29442d04bb31e393c99912ac2cc2332c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:56:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cee4566f32c048a3d574aab74cccbb29442d04bb31e393c99912ac2cc2332c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:56:40 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:40 compute-0 podman[252570]: 2025-11-26 12:56:40.275840024 +0000 UTC m=+0.084990608 container init 0cf2bc76b94a92ecbc5c8bf9572dbb36e928f252a5cc1c18b10e9100be01e36e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 26 12:56:40 compute-0 podman[252570]: 2025-11-26 12:56:40.280938434 +0000 UTC m=+0.090088999 container start 0cf2bc76b94a92ecbc5c8bf9572dbb36e928f252a5cc1c18b10e9100be01e36e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:56:40 compute-0 podman[252570]: 2025-11-26 12:56:40.28223558 +0000 UTC m=+0.091386143 container attach 0cf2bc76b94a92ecbc5c8bf9572dbb36e928f252a5cc1c18b10e9100be01e36e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:56:40 compute-0 podman[252570]: 2025-11-26 12:56:40.207600923 +0000 UTC m=+0.016751487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:56:41 compute-0 quizzical_sutherland[252583]: {
Nov 26 12:56:41 compute-0 quizzical_sutherland[252583]:     "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 12:56:41 compute-0 quizzical_sutherland[252583]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:56:41 compute-0 quizzical_sutherland[252583]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 12:56:41 compute-0 quizzical_sutherland[252583]:         "osd_id": 1,
Nov 26 12:56:41 compute-0 quizzical_sutherland[252583]:         "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:56:41 compute-0 quizzical_sutherland[252583]:         "type": "bluestore"
Nov 26 12:56:41 compute-0 quizzical_sutherland[252583]:     },
Nov 26 12:56:41 compute-0 quizzical_sutherland[252583]:     "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 12:56:41 compute-0 quizzical_sutherland[252583]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:56:41 compute-0 quizzical_sutherland[252583]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 12:56:41 compute-0 quizzical_sutherland[252583]:         "osd_id": 2,
Nov 26 12:56:41 compute-0 quizzical_sutherland[252583]:         "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:56:41 compute-0 quizzical_sutherland[252583]:         "type": "bluestore"
Nov 26 12:56:41 compute-0 quizzical_sutherland[252583]:     },
Nov 26 12:56:41 compute-0 quizzical_sutherland[252583]:     "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 12:56:41 compute-0 quizzical_sutherland[252583]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:56:41 compute-0 quizzical_sutherland[252583]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 12:56:41 compute-0 quizzical_sutherland[252583]:         "osd_id": 0,
Nov 26 12:56:41 compute-0 quizzical_sutherland[252583]:         "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:56:41 compute-0 quizzical_sutherland[252583]:         "type": "bluestore"
Nov 26 12:56:41 compute-0 quizzical_sutherland[252583]:     }
Nov 26 12:56:41 compute-0 quizzical_sutherland[252583]: }
Nov 26 12:56:41 compute-0 systemd[1]: libpod-0cf2bc76b94a92ecbc5c8bf9572dbb36e928f252a5cc1c18b10e9100be01e36e.scope: Deactivated successfully.
Nov 26 12:56:41 compute-0 podman[252570]: 2025-11-26 12:56:41.068519473 +0000 UTC m=+0.877670037 container died 0cf2bc76b94a92ecbc5c8bf9572dbb36e928f252a5cc1c18b10e9100be01e36e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:56:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:56:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cee4566f32c048a3d574aab74cccbb29442d04bb31e393c99912ac2cc2332c6-merged.mount: Deactivated successfully.
Nov 26 12:56:41 compute-0 podman[252570]: 2025-11-26 12:56:41.101688257 +0000 UTC m=+0.910838821 container remove 0cf2bc76b94a92ecbc5c8bf9572dbb36e928f252a5cc1c18b10e9100be01e36e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 12:56:41 compute-0 systemd[1]: libpod-conmon-0cf2bc76b94a92ecbc5c8bf9572dbb36e928f252a5cc1c18b10e9100be01e36e.scope: Deactivated successfully.
Nov 26 12:56:41 compute-0 sudo[252480]: pam_unix(sudo:session): session closed for user root
Nov 26 12:56:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:56:41 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:56:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:56:41 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:56:41 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 1183bdc7-0adc-4402-954b-5bb66cbec355 does not exist
Nov 26 12:56:41 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 9320b0de-420f-4965-9cbc-1481d047ab2e does not exist
Nov 26 12:56:41 compute-0 sudo[252625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:56:41 compute-0 sudo[252625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:56:41 compute-0 sudo[252625]: pam_unix(sudo:session): session closed for user root
Nov 26 12:56:41 compute-0 sudo[252650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 12:56:41 compute-0 sudo[252650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:56:41 compute-0 sudo[252650]: pam_unix(sudo:session): session closed for user root
Nov 26 12:56:41 compute-0 ceph-mon[74966]: pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:41 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:56:41 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:56:42 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:43 compute-0 ceph-mon[74966]: pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:44 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 12:56:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:56:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 12:56:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:56:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:56:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:56:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:56:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:56:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:56:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:56:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:56:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:56:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 12:56:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:56:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:56:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:56:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 12:56:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:56:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 12:56:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:56:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:56:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:56:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 12:56:45 compute-0 ceph-mon[74966]: pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:56:46 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:47 compute-0 ceph-mon[74966]: pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:48 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:49 compute-0 ceph-mon[74966]: pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:49 compute-0 podman[252675]: 2025-11-26 12:56:49.881382639 +0000 UTC m=+0.043675129 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 26 12:56:49 compute-0 podman[252676]: 2025-11-26 12:56:49.886418491 +0000 UTC m=+0.048212631 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 26 12:56:50 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:51 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:56:51 compute-0 ceph-mon[74966]: pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:52 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:53 compute-0 ceph-mon[74966]: pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:54 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:54 compute-0 podman[252708]: 2025-11-26 12:56:54.893385997 +0000 UTC m=+0.058089780 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 26 12:56:55 compute-0 ceph-mon[74966]: pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:55 compute-0 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 12:56:55 compute-0 ceph-mon[74966]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3318 writes, 14K keys, 3318 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3318 writes, 3318 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1304 writes, 5915 keys, 1304 commit groups, 1.0 writes per commit group, ingest: 8.56 MB, 0.01 MB/s
                                           Interval WAL: 1304 writes, 1304 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    343.4      0.05              0.04         7    0.007       0      0       0.0       0.0
                                             L6      1/0    6.81 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    530.7    434.5      0.10              0.08         6    0.016     24K   3203       0.0       0.0
                                            Sum      1/0    6.81 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.6    356.6    404.6      0.14              0.12        13    0.011     24K   3203       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.7    377.0    380.0      0.09              0.08         8    0.012     17K   2474       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    530.7    434.5      0.10              0.08         6    0.016     24K   3203       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    350.2      0.05              0.04         6    0.008       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     48.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.016, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.1 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560bd0e9b1f0#2 capacity: 308.00 MB usage: 1.45 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(99,1.23 MB,0.398695%) FilterBlock(14,75.55 KB,0.0239533%) IndexBlock(14,149.28 KB,0.047332%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 26 12:56:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:56:56 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:57 compute-0 ceph-mon[74966]: pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:58 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:56:59 compute-0 ceph-mon[74966]: pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:00 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:01 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:57:01 compute-0 ceph-mon[74966]: pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:57:01.730 159053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 12:57:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:57:01.730 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 12:57:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:57:01.731 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 12:57:02 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:03 compute-0 ceph-mon[74966]: pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:04 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:05 compute-0 ceph-mon[74966]: pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:57:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:57:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:57:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:57:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:57:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:57:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:57:06 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:07 compute-0 ceph-mon[74966]: pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:08 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:09 compute-0 ceph-mon[74966]: pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:10 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:11 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:57:11 compute-0 ceph-mon[74966]: pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:12 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:13 compute-0 ceph-mon[74966]: pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:14 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:15 compute-0 ceph-mon[74966]: pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:16 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:57:16 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:17 compute-0 ceph-mon[74966]: pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:18 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 12:57:18 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3537055593' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 12:57:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 12:57:18 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3537055593' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 12:57:19 compute-0 ceph-mon[74966]: pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:19 compute-0 ceph-mon[74966]: from='client.? 192.168.122.10:0/3537055593' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 12:57:19 compute-0 ceph-mon[74966]: from='client.? 192.168.122.10:0/3537055593' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 12:57:20 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:20 compute-0 podman[252732]: 2025-11-26 12:57:20.892829139 +0000 UTC m=+0.050170283 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 12:57:20 compute-0 podman[252731]: 2025-11-26 12:57:20.914599956 +0000 UTC m=+0.072950903 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 26 12:57:21 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:57:21 compute-0 ceph-mon[74966]: pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:22 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:23 compute-0 ceph-mon[74966]: pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:24 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:24 compute-0 nova_compute[247443]: 2025-11-26 12:57:24.820 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:57:25 compute-0 ceph-mon[74966]: pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:25 compute-0 nova_compute[247443]: 2025-11-26 12:57:25.820 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:57:25 compute-0 nova_compute[247443]: 2025-11-26 12:57:25.820 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 12:57:25 compute-0 nova_compute[247443]: 2025-11-26 12:57:25.820 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 12:57:25 compute-0 nova_compute[247443]: 2025-11-26 12:57:25.834 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 12:57:25 compute-0 podman[252764]: 2025-11-26 12:57:25.910643861 +0000 UTC m=+0.063586070 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 26 12:57:26 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:57:26 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:27 compute-0 ceph-mon[74966]: pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:27 compute-0 nova_compute[247443]: 2025-11-26 12:57:27.819 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:57:27 compute-0 nova_compute[247443]: 2025-11-26 12:57:27.820 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:57:27 compute-0 nova_compute[247443]: 2025-11-26 12:57:27.842 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 12:57:27 compute-0 nova_compute[247443]: 2025-11-26 12:57:27.842 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 12:57:27 compute-0 nova_compute[247443]: 2025-11-26 12:57:27.843 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 12:57:27 compute-0 nova_compute[247443]: 2025-11-26 12:57:27.843 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 12:57:27 compute-0 nova_compute[247443]: 2025-11-26 12:57:27.843 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 12:57:28 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 12:57:28 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/41768250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 12:57:28 compute-0 nova_compute[247443]: 2025-11-26 12:57:28.202 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.359s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 12:57:28 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:28 compute-0 nova_compute[247443]: 2025-11-26 12:57:28.429 247447 WARNING nova.virt.libvirt.driver [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 12:57:28 compute-0 nova_compute[247443]: 2025-11-26 12:57:28.430 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5223MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 12:57:28 compute-0 nova_compute[247443]: 2025-11-26 12:57:28.431 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 12:57:28 compute-0 nova_compute[247443]: 2025-11-26 12:57:28.431 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 12:57:28 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/41768250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 12:57:28 compute-0 nova_compute[247443]: 2025-11-26 12:57:28.478 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 12:57:28 compute-0 nova_compute[247443]: 2025-11-26 12:57:28.478 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 12:57:28 compute-0 nova_compute[247443]: 2025-11-26 12:57:28.490 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 12:57:28 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 12:57:28 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1743192530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 12:57:28 compute-0 nova_compute[247443]: 2025-11-26 12:57:28.837 247447 DEBUG oslo_concurrency.processutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 12:57:28 compute-0 nova_compute[247443]: 2025-11-26 12:57:28.842 247447 DEBUG nova.compute.provider_tree [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Inventory has not changed in ProviderTree for provider: b5f91a62-c356-4895-a9c1-523d85f8751b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 12:57:28 compute-0 nova_compute[247443]: 2025-11-26 12:57:28.853 247447 DEBUG nova.scheduler.client.report [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Inventory has not changed for provider b5f91a62-c356-4895-a9c1-523d85f8751b based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 12:57:28 compute-0 nova_compute[247443]: 2025-11-26 12:57:28.854 247447 DEBUG nova.compute.resource_tracker [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 12:57:28 compute-0 nova_compute[247443]: 2025-11-26 12:57:28.855 247447 DEBUG oslo_concurrency.lockutils [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.424s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 12:57:29 compute-0 ceph-mon[74966]: pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:29 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1743192530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 12:57:29 compute-0 nova_compute[247443]: 2025-11-26 12:57:29.850 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:57:29 compute-0 nova_compute[247443]: 2025-11-26 12:57:29.850 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:57:29 compute-0 nova_compute[247443]: 2025-11-26 12:57:29.864 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:57:29 compute-0 nova_compute[247443]: 2025-11-26 12:57:29.864 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:57:29 compute-0 nova_compute[247443]: 2025-11-26 12:57:29.864 247447 DEBUG nova.compute.manager [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 12:57:30 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:30 compute-0 nova_compute[247443]: 2025-11-26 12:57:30.819 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:57:30 compute-0 nova_compute[247443]: 2025-11-26 12:57:30.820 247447 DEBUG oslo_service.periodic_task [None req-61207dff-cbfb-458b-b492-55490af079b7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 12:57:31 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:57:31 compute-0 ceph-mon[74966]: pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:32 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:33 compute-0 ceph-mon[74966]: pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:34 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:35 compute-0 ceph-mon[74966]: pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Optimize plan auto_2025-11-26_12:57:35
Nov 26 12:57:35 compute-0 ceph-mgr[75236]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 12:57:35 compute-0 ceph-mgr[75236]: [balancer INFO root] do_upmap
Nov 26 12:57:35 compute-0 ceph-mgr[75236]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'backups', '.mgr', 'vms', 'default.rgw.control']
Nov 26 12:57:35 compute-0 ceph-mgr[75236]: [balancer INFO root] prepared 0/10 changes
Nov 26 12:57:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:57:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:57:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:57:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:57:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:57:35 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:57:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 12:57:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:57:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 12:57:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 12:57:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:57:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 12:57:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:57:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 12:57:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:57:35 compute-0 ceph-mgr[75236]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 12:57:36 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:57:36 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:37 compute-0 ceph-mon[74966]: pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:38 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:39 compute-0 ceph-mon[74966]: pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:40 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:57:41 compute-0 sudo[252831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:57:41 compute-0 sudo[252831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:57:41 compute-0 sudo[252831]: pam_unix(sudo:session): session closed for user root
Nov 26 12:57:41 compute-0 sudo[252856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:57:41 compute-0 sudo[252856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:57:41 compute-0 sudo[252856]: pam_unix(sudo:session): session closed for user root
Nov 26 12:57:41 compute-0 sudo[252881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:57:41 compute-0 sudo[252881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:57:41 compute-0 sudo[252881]: pam_unix(sudo:session): session closed for user root
Nov 26 12:57:41 compute-0 sudo[252906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 12:57:41 compute-0 sudo[252906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:57:41 compute-0 ceph-mon[74966]: pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:41 compute-0 sudo[252906]: pam_unix(sudo:session): session closed for user root
Nov 26 12:57:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:57:41 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:57:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 12:57:41 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:57:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 12:57:41 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:57:41 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev a62ce6f9-dde9-44b4-a9b3-66f980949d00 does not exist
Nov 26 12:57:41 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 496e8c7f-bc3b-414a-be76-71b83801c08f does not exist
Nov 26 12:57:41 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev daea28d9-4964-4a72-837e-75a1fb2e70d3 does not exist
Nov 26 12:57:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 12:57:41 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:57:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 12:57:41 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:57:41 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:57:41 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:57:41 compute-0 sudo[252960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:57:41 compute-0 sudo[252960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:57:41 compute-0 sudo[252960]: pam_unix(sudo:session): session closed for user root
Nov 26 12:57:41 compute-0 sudo[252985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:57:41 compute-0 sudo[252985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:57:41 compute-0 sudo[252985]: pam_unix(sudo:session): session closed for user root
Nov 26 12:57:41 compute-0 sudo[253010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:57:41 compute-0 sudo[253010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:57:41 compute-0 sudo[253010]: pam_unix(sudo:session): session closed for user root
Nov 26 12:57:41 compute-0 sudo[253035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 12:57:41 compute-0 sudo[253035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:57:42 compute-0 podman[253091]: 2025-11-26 12:57:42.227824019 +0000 UTC m=+0.029510228 container create 9cb9b02f7a812bf77353e49668485370256c7a44f64f2c8715ecc4ef9545315d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:57:42 compute-0 systemd[1]: Started libpod-conmon-9cb9b02f7a812bf77353e49668485370256c7a44f64f2c8715ecc4ef9545315d.scope.
Nov 26 12:57:42 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:57:42 compute-0 podman[253091]: 2025-11-26 12:57:42.292533666 +0000 UTC m=+0.094219885 container init 9cb9b02f7a812bf77353e49668485370256c7a44f64f2c8715ecc4ef9545315d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 12:57:42 compute-0 podman[253091]: 2025-11-26 12:57:42.297881246 +0000 UTC m=+0.099567455 container start 9cb9b02f7a812bf77353e49668485370256c7a44f64f2c8715ecc4ef9545315d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cerf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 12:57:42 compute-0 podman[253091]: 2025-11-26 12:57:42.299117367 +0000 UTC m=+0.100803576 container attach 9cb9b02f7a812bf77353e49668485370256c7a44f64f2c8715ecc4ef9545315d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:57:42 compute-0 goofy_cerf[253104]: 167 167
Nov 26 12:57:42 compute-0 systemd[1]: libpod-9cb9b02f7a812bf77353e49668485370256c7a44f64f2c8715ecc4ef9545315d.scope: Deactivated successfully.
Nov 26 12:57:42 compute-0 podman[253091]: 2025-11-26 12:57:42.302910326 +0000 UTC m=+0.104596615 container died 9cb9b02f7a812bf77353e49668485370256c7a44f64f2c8715ecc4ef9545315d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cerf, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 12:57:42 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:42 compute-0 podman[253091]: 2025-11-26 12:57:42.216152257 +0000 UTC m=+0.017838486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:57:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-62bc4aaba12a7ed4ab637fb801dc10fc39ee0b361d76ea369ee32c5ddf33b07e-merged.mount: Deactivated successfully.
Nov 26 12:57:42 compute-0 podman[253091]: 2025-11-26 12:57:42.328221878 +0000 UTC m=+0.129908087 container remove 9cb9b02f7a812bf77353e49668485370256c7a44f64f2c8715ecc4ef9545315d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cerf, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:57:42 compute-0 systemd[1]: libpod-conmon-9cb9b02f7a812bf77353e49668485370256c7a44f64f2c8715ecc4ef9545315d.scope: Deactivated successfully.
Nov 26 12:57:42 compute-0 podman[253125]: 2025-11-26 12:57:42.461277659 +0000 UTC m=+0.039416080 container create a2bf487670d125733f02add188bf51fa2d9dc661f748aedcf305c250d3451600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_raman, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 12:57:42 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:57:42 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 12:57:42 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:57:42 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 12:57:42 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 12:57:42 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:57:42 compute-0 systemd[1]: Started libpod-conmon-a2bf487670d125733f02add188bf51fa2d9dc661f748aedcf305c250d3451600.scope.
Nov 26 12:57:42 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:57:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9ce25852059e86d0588da0bee4bd34edf864d5e0e986c49cb93ea781783fa3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:57:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9ce25852059e86d0588da0bee4bd34edf864d5e0e986c49cb93ea781783fa3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:57:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9ce25852059e86d0588da0bee4bd34edf864d5e0e986c49cb93ea781783fa3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:57:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9ce25852059e86d0588da0bee4bd34edf864d5e0e986c49cb93ea781783fa3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:57:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9ce25852059e86d0588da0bee4bd34edf864d5e0e986c49cb93ea781783fa3f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 12:57:42 compute-0 podman[253125]: 2025-11-26 12:57:42.531842162 +0000 UTC m=+0.109980593 container init a2bf487670d125733f02add188bf51fa2d9dc661f748aedcf305c250d3451600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_raman, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:57:42 compute-0 podman[253125]: 2025-11-26 12:57:42.538012813 +0000 UTC m=+0.116151235 container start a2bf487670d125733f02add188bf51fa2d9dc661f748aedcf305c250d3451600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_raman, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 12:57:42 compute-0 podman[253125]: 2025-11-26 12:57:42.539112166 +0000 UTC m=+0.117250587 container attach a2bf487670d125733f02add188bf51fa2d9dc661f748aedcf305c250d3451600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 12:57:42 compute-0 podman[253125]: 2025-11-26 12:57:42.446086754 +0000 UTC m=+0.024225185 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:57:43 compute-0 lucid_raman[253139]: --> passed data devices: 0 physical, 3 LVM
Nov 26 12:57:43 compute-0 lucid_raman[253139]: --> relative data size: 1.0
Nov 26 12:57:43 compute-0 lucid_raman[253139]: --> All data devices are unavailable
Nov 26 12:57:43 compute-0 systemd[1]: libpod-a2bf487670d125733f02add188bf51fa2d9dc661f748aedcf305c250d3451600.scope: Deactivated successfully.
Nov 26 12:57:43 compute-0 podman[253168]: 2025-11-26 12:57:43.468155302 +0000 UTC m=+0.017941391 container died a2bf487670d125733f02add188bf51fa2d9dc661f748aedcf305c250d3451600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_raman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 26 12:57:43 compute-0 ceph-mon[74966]: pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9ce25852059e86d0588da0bee4bd34edf864d5e0e986c49cb93ea781783fa3f-merged.mount: Deactivated successfully.
Nov 26 12:57:43 compute-0 podman[253168]: 2025-11-26 12:57:43.504912629 +0000 UTC m=+0.054698697 container remove a2bf487670d125733f02add188bf51fa2d9dc661f748aedcf305c250d3451600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_raman, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:57:43 compute-0 systemd[1]: libpod-conmon-a2bf487670d125733f02add188bf51fa2d9dc661f748aedcf305c250d3451600.scope: Deactivated successfully.
Nov 26 12:57:43 compute-0 sudo[253035]: pam_unix(sudo:session): session closed for user root
Nov 26 12:57:43 compute-0 sudo[253180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:57:43 compute-0 sudo[253180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:57:43 compute-0 sudo[253180]: pam_unix(sudo:session): session closed for user root
Nov 26 12:57:43 compute-0 sudo[253205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:57:43 compute-0 sudo[253205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:57:43 compute-0 sudo[253205]: pam_unix(sudo:session): session closed for user root
Nov 26 12:57:43 compute-0 sudo[253230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:57:43 compute-0 sudo[253230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:57:43 compute-0 sudo[253230]: pam_unix(sudo:session): session closed for user root
Nov 26 12:57:43 compute-0 sudo[253255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- lvm list --format json
Nov 26 12:57:43 compute-0 sudo[253255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:57:43 compute-0 podman[253310]: 2025-11-26 12:57:43.934718833 +0000 UTC m=+0.026312821 container create d57919da81662f6ef16d94dbc1a5aa9752bae52cc3f43af906786d3a766a01a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 12:57:43 compute-0 systemd[1]: Started libpod-conmon-d57919da81662f6ef16d94dbc1a5aa9752bae52cc3f43af906786d3a766a01a8.scope.
Nov 26 12:57:43 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:57:43 compute-0 podman[253310]: 2025-11-26 12:57:43.98905649 +0000 UTC m=+0.080650497 container init d57919da81662f6ef16d94dbc1a5aa9752bae52cc3f43af906786d3a766a01a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_goldstine, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 12:57:43 compute-0 podman[253310]: 2025-11-26 12:57:43.994284514 +0000 UTC m=+0.085878503 container start d57919da81662f6ef16d94dbc1a5aa9752bae52cc3f43af906786d3a766a01a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 26 12:57:43 compute-0 podman[253310]: 2025-11-26 12:57:43.995454219 +0000 UTC m=+0.087048206 container attach d57919da81662f6ef16d94dbc1a5aa9752bae52cc3f43af906786d3a766a01a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:57:43 compute-0 great_goldstine[253323]: 167 167
Nov 26 12:57:43 compute-0 systemd[1]: libpod-d57919da81662f6ef16d94dbc1a5aa9752bae52cc3f43af906786d3a766a01a8.scope: Deactivated successfully.
Nov 26 12:57:43 compute-0 podman[253310]: 2025-11-26 12:57:43.999223234 +0000 UTC m=+0.090817222 container died d57919da81662f6ef16d94dbc1a5aa9752bae52cc3f43af906786d3a766a01a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_goldstine, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:57:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fd16bc8da6851179e9e90e0e2450eb2208ee52d819ecc4b2243b0e008d31bcf-merged.mount: Deactivated successfully.
Nov 26 12:57:44 compute-0 podman[253310]: 2025-11-26 12:57:44.018185919 +0000 UTC m=+0.109779907 container remove d57919da81662f6ef16d94dbc1a5aa9752bae52cc3f43af906786d3a766a01a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_goldstine, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 26 12:57:44 compute-0 podman[253310]: 2025-11-26 12:57:43.924167915 +0000 UTC m=+0.015761893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:57:44 compute-0 systemd[1]: libpod-conmon-d57919da81662f6ef16d94dbc1a5aa9752bae52cc3f43af906786d3a766a01a8.scope: Deactivated successfully.
Nov 26 12:57:44 compute-0 podman[253345]: 2025-11-26 12:57:44.144099996 +0000 UTC m=+0.029462398 container create 15f5387ee47a95cdb46f1668492198a646b8a22e68eafa37756a25153472ceb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 12:57:44 compute-0 systemd[1]: Started libpod-conmon-15f5387ee47a95cdb46f1668492198a646b8a22e68eafa37756a25153472ceb2.scope.
Nov 26 12:57:44 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:57:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82874c3e83add2cd5cc698f6adebfd8851a6935c7de5f776860c04f9b4d661a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:57:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82874c3e83add2cd5cc698f6adebfd8851a6935c7de5f776860c04f9b4d661a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:57:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82874c3e83add2cd5cc698f6adebfd8851a6935c7de5f776860c04f9b4d661a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:57:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82874c3e83add2cd5cc698f6adebfd8851a6935c7de5f776860c04f9b4d661a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:57:44 compute-0 podman[253345]: 2025-11-26 12:57:44.211485798 +0000 UTC m=+0.096848210 container init 15f5387ee47a95cdb46f1668492198a646b8a22e68eafa37756a25153472ceb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_varahamihira, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 12:57:44 compute-0 podman[253345]: 2025-11-26 12:57:44.217404986 +0000 UTC m=+0.102767388 container start 15f5387ee47a95cdb46f1668492198a646b8a22e68eafa37756a25153472ceb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:57:44 compute-0 podman[253345]: 2025-11-26 12:57:44.218316935 +0000 UTC m=+0.103679337 container attach 15f5387ee47a95cdb46f1668492198a646b8a22e68eafa37756a25153472ceb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 26 12:57:44 compute-0 podman[253345]: 2025-11-26 12:57:44.133487481 +0000 UTC m=+0.018849904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:57:44 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]: {
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:     "0": [
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:         {
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "devices": [
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "/dev/loop3"
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             ],
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "lv_name": "ceph_lv0",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "lv_size": "21470642176",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ef2b480d-9484-4a2f-b46e-f0af80cc4943,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "lv_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "name": "ceph_lv0",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "tags": {
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.block_uuid": "QUenTb-BOcJ-bdE0-0K5q-0ycW-vgNR-uzbHj0",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.cluster_name": "ceph",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.crush_device_class": "",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.encrypted": "0",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.osd_fsid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.osd_id": "0",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.type": "block",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.vdo": "0"
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             },
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "type": "block",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "vg_name": "ceph_vg0"
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:         }
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:     ],
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:     "1": [
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:         {
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "devices": [
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "/dev/loop4"
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             ],
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "lv_name": "ceph_lv1",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "lv_size": "21470642176",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=241a5bb6-a0a2-4f46-939e-db435256704f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "lv_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "name": "ceph_lv1",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "tags": {
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.block_uuid": "NTj8AO-R44P-3MXA-02nz-NTzn-QFKW-ukkeK1",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.cluster_name": "ceph",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.crush_device_class": "",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.encrypted": "0",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.osd_fsid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.osd_id": "1",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.type": "block",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.vdo": "0"
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             },
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "type": "block",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "vg_name": "ceph_vg1"
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:         }
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:     ],
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:     "2": [
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:         {
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "devices": [
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "/dev/loop5"
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             ],
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "lv_name": "ceph_lv2",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "lv_size": "21470642176",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f7d7fe93-41e5-51c4-b72d-63b38686102e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=830db782-65d7-4e18-bccf-dab0d5334a8b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "lv_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "name": "ceph_lv2",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "tags": {
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.block_uuid": "hklteZ-Q9LE-H3lt-dH1r-1uyS-8NFa-DcTf4P",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.cluster_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.cluster_name": "ceph",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.crush_device_class": "",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.encrypted": "0",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.osd_fsid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.osd_id": "2",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.type": "block",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:                 "ceph.vdo": "0"
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             },
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "type": "block",
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:             "vg_name": "ceph_vg2"
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:         }
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]:     ]
Nov 26 12:57:44 compute-0 zealous_varahamihira[253358]: }
Nov 26 12:57:44 compute-0 systemd[1]: libpod-15f5387ee47a95cdb46f1668492198a646b8a22e68eafa37756a25153472ceb2.scope: Deactivated successfully.
Nov 26 12:57:44 compute-0 podman[253345]: 2025-11-26 12:57:44.867320295 +0000 UTC m=+0.752682707 container died 15f5387ee47a95cdb46f1668492198a646b8a22e68eafa37756a25153472ceb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_varahamihira, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 12:57:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-82874c3e83add2cd5cc698f6adebfd8851a6935c7de5f776860c04f9b4d661a1-merged.mount: Deactivated successfully.
Nov 26 12:57:44 compute-0 podman[253345]: 2025-11-26 12:57:44.899649917 +0000 UTC m=+0.785012319 container remove 15f5387ee47a95cdb46f1668492198a646b8a22e68eafa37756a25153472ceb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_varahamihira, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 12:57:44 compute-0 systemd[1]: libpod-conmon-15f5387ee47a95cdb46f1668492198a646b8a22e68eafa37756a25153472ceb2.scope: Deactivated successfully.
Nov 26 12:57:44 compute-0 sudo[253255]: pam_unix(sudo:session): session closed for user root
Nov 26 12:57:44 compute-0 sudo[253377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:57:44 compute-0 sudo[253377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:57:44 compute-0 sudo[253377]: pam_unix(sudo:session): session closed for user root
Nov 26 12:57:45 compute-0 sudo[253402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 12:57:45 compute-0 sudo[253402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:57:45 compute-0 sudo[253402]: pam_unix(sudo:session): session closed for user root
Nov 26 12:57:45 compute-0 sudo[253427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:57:45 compute-0 sudo[253427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:57:45 compute-0 sudo[253427]: pam_unix(sudo:session): session closed for user root
Nov 26 12:57:45 compute-0 sudo[253452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f7d7fe93-41e5-51c4-b72d-63b38686102e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f7d7fe93-41e5-51c4-b72d-63b38686102e -- raw list --format json
Nov 26 12:57:45 compute-0 sudo[253452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:57:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 12:57:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:57:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 12:57:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:57:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:57:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:57:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:57:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:57:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:57:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:57:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:57:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:57:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 12:57:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:57:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:57:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:57:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 12:57:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:57:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 12:57:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:57:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:57:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:57:45 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 12:57:45 compute-0 podman[253509]: 2025-11-26 12:57:45.35898348 +0000 UTC m=+0.030641581 container create c2d5bdf53f9baf23639c7cc9603d206b93eaec8f2b14ca7595556d2c56a16c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ishizaka, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 12:57:45 compute-0 systemd[1]: Started libpod-conmon-c2d5bdf53f9baf23639c7cc9603d206b93eaec8f2b14ca7595556d2c56a16c33.scope.
Nov 26 12:57:45 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:57:45 compute-0 podman[253509]: 2025-11-26 12:57:45.419316467 +0000 UTC m=+0.090974578 container init c2d5bdf53f9baf23639c7cc9603d206b93eaec8f2b14ca7595556d2c56a16c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 12:57:45 compute-0 podman[253509]: 2025-11-26 12:57:45.424680508 +0000 UTC m=+0.096338609 container start c2d5bdf53f9baf23639c7cc9603d206b93eaec8f2b14ca7595556d2c56a16c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ishizaka, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 12:57:45 compute-0 podman[253509]: 2025-11-26 12:57:45.425935624 +0000 UTC m=+0.097593746 container attach c2d5bdf53f9baf23639c7cc9603d206b93eaec8f2b14ca7595556d2c56a16c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 12:57:45 compute-0 jolly_ishizaka[253522]: 167 167
Nov 26 12:57:45 compute-0 systemd[1]: libpod-c2d5bdf53f9baf23639c7cc9603d206b93eaec8f2b14ca7595556d2c56a16c33.scope: Deactivated successfully.
Nov 26 12:57:45 compute-0 conmon[253522]: conmon c2d5bdf53f9baf23639c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c2d5bdf53f9baf23639c7cc9603d206b93eaec8f2b14ca7595556d2c56a16c33.scope/container/memory.events
Nov 26 12:57:45 compute-0 podman[253509]: 2025-11-26 12:57:45.346690126 +0000 UTC m=+0.018348248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:57:45 compute-0 podman[253527]: 2025-11-26 12:57:45.459371461 +0000 UTC m=+0.019723590 container died c2d5bdf53f9baf23639c7cc9603d206b93eaec8f2b14ca7595556d2c56a16c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ishizaka, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:57:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-c32b38a24f96f30cba06af8e2834c24412aa0934413a1785b83710ef9d9d6a3b-merged.mount: Deactivated successfully.
Nov 26 12:57:45 compute-0 ceph-mon[74966]: pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:45 compute-0 podman[253527]: 2025-11-26 12:57:45.476390732 +0000 UTC m=+0.036742831 container remove c2d5bdf53f9baf23639c7cc9603d206b93eaec8f2b14ca7595556d2c56a16c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ishizaka, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 12:57:45 compute-0 systemd[1]: libpod-conmon-c2d5bdf53f9baf23639c7cc9603d206b93eaec8f2b14ca7595556d2c56a16c33.scope: Deactivated successfully.
Nov 26 12:57:45 compute-0 podman[253545]: 2025-11-26 12:57:45.604863653 +0000 UTC m=+0.028815428 container create 66a7c3a46c4ed2441e33a2ffccd2c408ac9965de068f92252013c070e060ac5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mestorf, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 12:57:45 compute-0 systemd[1]: Started libpod-conmon-66a7c3a46c4ed2441e33a2ffccd2c408ac9965de068f92252013c070e060ac5c.scope.
Nov 26 12:57:45 compute-0 systemd[1]: Started libcrun container.
Nov 26 12:57:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a36b7f0788a5fa30d4289f823aa6d81c47208d89779505fafd6568cb82549b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 12:57:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a36b7f0788a5fa30d4289f823aa6d81c47208d89779505fafd6568cb82549b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 12:57:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a36b7f0788a5fa30d4289f823aa6d81c47208d89779505fafd6568cb82549b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 12:57:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a36b7f0788a5fa30d4289f823aa6d81c47208d89779505fafd6568cb82549b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 12:57:45 compute-0 podman[253545]: 2025-11-26 12:57:45.674772892 +0000 UTC m=+0.098724677 container init 66a7c3a46c4ed2441e33a2ffccd2c408ac9965de068f92252013c070e060ac5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mestorf, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:57:45 compute-0 podman[253545]: 2025-11-26 12:57:45.679710669 +0000 UTC m=+0.103662435 container start 66a7c3a46c4ed2441e33a2ffccd2c408ac9965de068f92252013c070e060ac5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:57:45 compute-0 podman[253545]: 2025-11-26 12:57:45.680905362 +0000 UTC m=+0.104857127 container attach 66a7c3a46c4ed2441e33a2ffccd2c408ac9965de068f92252013c070e060ac5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mestorf, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 12:57:45 compute-0 podman[253545]: 2025-11-26 12:57:45.593088747 +0000 UTC m=+0.017040533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 12:57:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:57:46 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:46 compute-0 charming_mestorf[253558]: {
Nov 26 12:57:46 compute-0 charming_mestorf[253558]:     "241a5bb6-a0a2-4f46-939e-db435256704f": {
Nov 26 12:57:46 compute-0 charming_mestorf[253558]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:57:46 compute-0 charming_mestorf[253558]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 12:57:46 compute-0 charming_mestorf[253558]:         "osd_id": 1,
Nov 26 12:57:46 compute-0 charming_mestorf[253558]:         "osd_uuid": "241a5bb6-a0a2-4f46-939e-db435256704f",
Nov 26 12:57:46 compute-0 charming_mestorf[253558]:         "type": "bluestore"
Nov 26 12:57:46 compute-0 charming_mestorf[253558]:     },
Nov 26 12:57:46 compute-0 charming_mestorf[253558]:     "830db782-65d7-4e18-bccf-dab0d5334a8b": {
Nov 26 12:57:46 compute-0 charming_mestorf[253558]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:57:46 compute-0 charming_mestorf[253558]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 12:57:46 compute-0 charming_mestorf[253558]:         "osd_id": 2,
Nov 26 12:57:46 compute-0 charming_mestorf[253558]:         "osd_uuid": "830db782-65d7-4e18-bccf-dab0d5334a8b",
Nov 26 12:57:46 compute-0 charming_mestorf[253558]:         "type": "bluestore"
Nov 26 12:57:46 compute-0 charming_mestorf[253558]:     },
Nov 26 12:57:46 compute-0 charming_mestorf[253558]:     "ef2b480d-9484-4a2f-b46e-f0af80cc4943": {
Nov 26 12:57:46 compute-0 charming_mestorf[253558]:         "ceph_fsid": "f7d7fe93-41e5-51c4-b72d-63b38686102e",
Nov 26 12:57:46 compute-0 charming_mestorf[253558]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 12:57:46 compute-0 charming_mestorf[253558]:         "osd_id": 0,
Nov 26 12:57:46 compute-0 charming_mestorf[253558]:         "osd_uuid": "ef2b480d-9484-4a2f-b46e-f0af80cc4943",
Nov 26 12:57:46 compute-0 charming_mestorf[253558]:         "type": "bluestore"
Nov 26 12:57:46 compute-0 charming_mestorf[253558]:     }
Nov 26 12:57:46 compute-0 charming_mestorf[253558]: }
Nov 26 12:57:46 compute-0 systemd[1]: libpod-66a7c3a46c4ed2441e33a2ffccd2c408ac9965de068f92252013c070e060ac5c.scope: Deactivated successfully.
Nov 26 12:57:46 compute-0 podman[253545]: 2025-11-26 12:57:46.455149531 +0000 UTC m=+0.879101296 container died 66a7c3a46c4ed2441e33a2ffccd2c408ac9965de068f92252013c070e060ac5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mestorf, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 12:57:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-20a36b7f0788a5fa30d4289f823aa6d81c47208d89779505fafd6568cb82549b-merged.mount: Deactivated successfully.
Nov 26 12:57:46 compute-0 podman[253545]: 2025-11-26 12:57:46.489187161 +0000 UTC m=+0.913138937 container remove 66a7c3a46c4ed2441e33a2ffccd2c408ac9965de068f92252013c070e060ac5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 12:57:46 compute-0 systemd[1]: libpod-conmon-66a7c3a46c4ed2441e33a2ffccd2c408ac9965de068f92252013c070e060ac5c.scope: Deactivated successfully.
Nov 26 12:57:46 compute-0 sudo[253452]: pam_unix(sudo:session): session closed for user root
Nov 26 12:57:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 12:57:46 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:57:46 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 12:57:46 compute-0 ceph-mon[74966]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:57:46 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 64c196d9-f8f4-41f3-8a3d-9d55f49328b1 does not exist
Nov 26 12:57:46 compute-0 ceph-mgr[75236]: [progress WARNING root] complete: ev 1484eeaf-69e9-4dd4-b65e-81924193db24 does not exist
Nov 26 12:57:46 compute-0 sudo[253601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 12:57:46 compute-0 sudo[253601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:57:46 compute-0 sudo[253601]: pam_unix(sudo:session): session closed for user root
Nov 26 12:57:46 compute-0 sudo[253626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 12:57:46 compute-0 sudo[253626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 12:57:46 compute-0 sudo[253626]: pam_unix(sudo:session): session closed for user root
Nov 26 12:57:47 compute-0 ceph-mon[74966]: pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:47 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:57:47 compute-0 ceph-mon[74966]: from='mgr.14132 192.168.122.100:0/1849810487' entity='mgr.compute-0.whkbdn' 
Nov 26 12:57:48 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:49 compute-0 sshd-session[253651]: Accepted publickey for zuul from 192.168.122.10 port 47572 ssh2: ECDSA SHA256:oYqKaXpw3UXfGsjV9kVmxjhHDxrL0kntNo/c2mjveus
Nov 26 12:57:49 compute-0 systemd-logind[777]: New session 51 of user zuul.
Nov 26 12:57:49 compute-0 systemd[1]: Started Session 51 of User zuul.
Nov 26 12:57:49 compute-0 sshd-session[253651]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 12:57:49 compute-0 ceph-mon[74966]: pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:49 compute-0 sudo[253655]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 26 12:57:49 compute-0 sudo[253655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 12:57:50 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:50 compute-0 ceph-mon[74966]: pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:51 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:57:51 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14395 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:57:51 compute-0 podman[253842]: 2025-11-26 12:57:51.885328119 +0000 UTC m=+0.047446629 container health_status fb911699b7b55af6d0f3d30a2bc4433387ff957fff964072cc3b14a0675b0636 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251118, config_id=multipathd, managed_by=edpm_ansible)
Nov 26 12:57:51 compute-0 podman[253841]: 2025-11-26 12:57:51.90736053 +0000 UTC m=+0.069673135 container health_status 5a1efd0ce794c338d22cd8f5b4e49bfb744eda2579a7e4e187e451dd502098ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 12:57:51 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14397 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:57:52 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 26 12:57:52 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/678039320' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 26 12:57:52 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:52 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/678039320' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 26 12:57:53 compute-0 ceph-mon[74966]: from='client.14395 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:57:53 compute-0 ceph-mon[74966]: from='client.14397 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:57:53 compute-0 ceph-mon[74966]: pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:54 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:55 compute-0 ceph-mon[74966]: pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:56 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:57:56 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:56 compute-0 podman[253955]: 2025-11-26 12:57:56.893296151 +0000 UTC m=+0.056308863 container health_status 4d3503eccbdc24d2016d79b1ef2fb2071be79196f59e5dcd11e326a0e8c896a0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 26 12:57:57 compute-0 ceph-mon[74966]: pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:58 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:58 compute-0 ovs-vsctl[254024]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 26 12:57:59 compute-0 ceph-mon[74966]: pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:57:59 compute-0 virtqemud[247331]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 26 12:57:59 compute-0 virtqemud[247331]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 26 12:57:59 compute-0 virtqemud[247331]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 26 12:58:00 compute-0 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim asok_command: cache status {prefix=cache status} (starting...)
Nov 26 12:58:00 compute-0 lvm[254326]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 26 12:58:00 compute-0 lvm[254326]: VG ceph_vg2 finished
Nov 26 12:58:00 compute-0 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim asok_command: client ls {prefix=client ls} (starting...)
Nov 26 12:58:00 compute-0 lvm[254338]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 26 12:58:00 compute-0 lvm[254338]: VG ceph_vg1 finished
Nov 26 12:58:00 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:58:00 compute-0 lvm[254378]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 26 12:58:00 compute-0 lvm[254378]: VG ceph_vg0 finished
Nov 26 12:58:00 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14401 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:00 compute-0 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim asok_command: damage ls {prefix=damage ls} (starting...)
Nov 26 12:58:00 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14403 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:00 compute-0 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim asok_command: dump loads {prefix=dump loads} (starting...)
Nov 26 12:58:01 compute-0 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 26 12:58:01 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:58:01 compute-0 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 26 12:58:01 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 26 12:58:01 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/801038007' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 26 12:58:01 compute-0 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 26 12:58:01 compute-0 ceph-mon[74966]: pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:58:01 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/801038007' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 26 12:58:01 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14409 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:01 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:58:01.445+0000 7f35d37a6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 26 12:58:01 compute-0 ceph-mgr[75236]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 26 12:58:01 compute-0 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 26 12:58:01 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 12:58:01 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1225205702' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:58:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:58:01.731 159053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 12:58:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:58:01.732 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 12:58:01 compute-0 ovn_metadata_agent[159048]: 2025-11-26 12:58:01.732 159053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 12:58:01 compute-0 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 26 12:58:01 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 26 12:58:01 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4292323180' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 26 12:58:01 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 26 12:58:01 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/670901816' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 26 12:58:01 compute-0 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 26 12:58:02 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 26 12:58:02 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3568460546' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 26 12:58:02 compute-0 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim asok_command: ops {prefix=ops} (starting...)
Nov 26 12:58:02 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 26 12:58:02 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4159389430' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 26 12:58:02 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:58:02 compute-0 ceph-mon[74966]: from='client.14401 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:02 compute-0 ceph-mon[74966]: from='client.14403 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:02 compute-0 ceph-mon[74966]: from='client.14409 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:02 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1225205702' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 12:58:02 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/4292323180' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 26 12:58:02 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/670901816' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 26 12:58:02 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/3568460546' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 26 12:58:02 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/4159389430' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 26 12:58:02 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 26 12:58:02 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2137303050' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 26 12:58:02 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14423 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:02 compute-0 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim asok_command: session ls {prefix=session ls} (starting...)
Nov 26 12:58:02 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 26 12:58:02 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2442166289' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 26 12:58:02 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14427 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:02 compute-0 ceph-mds[99300]: mds.cephfs.compute-0.ipyiim asok_command: status {prefix=status} (starting...)
Nov 26 12:58:03 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 26 12:58:03 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1702644273' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 12:58:03 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 26 12:58:03 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1210058228' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 26 12:58:03 compute-0 ceph-mon[74966]: pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:58:03 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2137303050' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 26 12:58:03 compute-0 ceph-mon[74966]: from='client.14423 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:03 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2442166289' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 26 12:58:03 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1702644273' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 12:58:03 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1210058228' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 26 12:58:03 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 26 12:58:03 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/962523951' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 26 12:58:03 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 26 12:58:03 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3713095360' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 26 12:58:03 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 26 12:58:03 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/333546072' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 26 12:58:03 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14439 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:03 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:58:03.874+0000 7f35d37a6640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 26 12:58:03 compute-0 ceph-mgr[75236]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 26 12:58:04 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14441 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:04 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 26 12:58:04 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/169523798' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 26 12:58:04 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:58:04 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14445 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:04 compute-0 ceph-mon[74966]: from='client.14427 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:04 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/962523951' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 26 12:58:04 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/3713095360' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 26 12:58:04 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/333546072' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 26 12:58:04 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/169523798' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 26 12:58:04 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 26 12:58:04 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2978914446' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 26 12:58:04 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14449 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:04 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 26 12:58:04 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/490509282' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 26 12:58:05 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14453 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 26 12:58:05 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2338340369' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 26 12:58:05 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14457 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:05 compute-0 ceph-mon[74966]: from='client.14439 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:05 compute-0 ceph-mon[74966]: from='client.14441 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:05 compute-0 ceph-mon[74966]: pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:58:05 compute-0 ceph-mon[74966]: from='client.14445 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:05 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2978914446' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 26 12:58:05 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/490509282' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 26 12:58:05 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2338340369' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 26 12:58:05 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 26 12:58:05 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4162174485' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 26 12:58:05 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14461 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 44 ms_handle_reset con 0x5640f203e000 session 0x5640f1f93860
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:13.802682+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 57737216 unmapped: 2056192 heap: 59793408 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 331679 data_alloc: 218103808 data_used: 36864
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:14.802787+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:39:44.708435+0000 osd.2 (osd.2) 50 : cluster [DBG] 3.7 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:39:44.722603+0000 osd.2 (osd.2) 51 : cluster [DBG] 3.7 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 57737216 unmapped: 2056192 heap: 59793408 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 51) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:39:44.708435+0000 osd.2 (osd.2) 50 : cluster [DBG] 3.7 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:39:44.722603+0000 osd.2 (osd.2) 51 : cluster [DBG] 3.7 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:15.802907+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:39:45.721471+0000 osd.2 (osd.2) 52 : cluster [DBG] 3.5 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:39:45.735642+0000 osd.2 (osd.2) 53 : cluster [DBG] 3.5 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 44 handle_osd_map epochs [45,46], i have 44, src has [1,46]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 57802752 unmapped: 1990656 heap: 59793408 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 53) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:39:45.721471+0000 osd.2 (osd.2) 52 : cluster [DBG] 3.5 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:39:45.735642+0000 osd.2 (osd.2) 53 : cluster [DBG] 3.5 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:16.803071+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 57802752 unmapped: 1990656 heap: 59793408 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe153000/0x0/0x4ffc00000, data 0x36941/0x79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 46 handle_osd_map epochs [47,48], i have 46, src has [1,48]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=36/37 n=8 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=36) [2] r=0 lpr=36 crt=44'64 lcod 44'63 mlcod 44'63 active+clean] exit Started/Primary/Active/Clean 42.993061 25 0.000069
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=36/37 n=8 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=36) [2] r=0 lpr=36 crt=44'64 lcod 44'63 mlcod 44'63 active mbc={}] exit Started/Primary/Active 42.994247 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=36/37 n=8 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=36) [2] r=0 lpr=36 crt=44'64 lcod 44'63 mlcod 44'63 active mbc={}] exit Started/Primary 43.775537 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=36/37 n=8 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=36) [2] r=0 lpr=36 crt=44'64 lcod 44'63 mlcod 44'63 active mbc={}] exit Started 43.775726 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=36/37 n=8 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=36) [2] r=0 lpr=36 crt=44'64 lcod 44'63 mlcod 44'63 active mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=36/37 n=8 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 44'63 active pruub 96.626022339s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.2(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.3(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.4(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.5(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.6(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.7(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.8(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.9(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.a(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.b(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.c(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.d(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.e(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.f(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.10(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.11(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.12(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.13(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.14(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.15(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.16(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.17(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.18(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.19(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1a(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1b(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1c(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1d(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1e(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1f(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 unknown pruub 96.626022339s@ mbc={}] exit Reset 0.001773 2 0.000343
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 unknown pruub 96.626022339s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 unknown pruub 96.626022339s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 unknown pruub 96.626022339s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 unknown pruub 96.626022339s@ mbc={}] exit Start 0.000090 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 unknown pruub 96.626022339s@ mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 unknown pruub 96.626022339s@ mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 peering pruub 96.626022339s@ mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 peering pruub 96.626022339s@ mbc={}] exit Started/Primary/Peering/GetInfo 0.000021 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 peering pruub 96.626022339s@ mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 peering pruub 96.626022339s@ mbc={}] exit Started/Primary/Peering/GetLog 0.000032 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 peering pruub 96.626022339s@ mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 peering pruub 96.626022339s@ mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 peering pruub 96.626022339s@ mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.001935 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.001881 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000157 2 0.000076
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000008 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000041 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002628 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000267 2 0.000071
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002463 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000115 2 0.000101
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002332 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000096 2 0.000068
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000074 2 0.000041
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002474 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002493 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000044 2 0.000136
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000056 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000019 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002613 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000171 2 0.000144
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000039 2 0.000038
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003489 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003288 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003134 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003311 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003593 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.003778 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003583 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003406 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003390 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003549 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003040 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003028 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002991 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002976 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002957 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002952 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002940 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000610 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000615 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000030 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000625 2 0.000037
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000015 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000632 2 0.000047
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.002979 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000087 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003057 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000671 2 0.000050
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000710 2 0.000036
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000699 2 0.000033
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003105 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003117 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000736 2 0.000034
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000753 2 0.000034
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000026 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000768 2 0.000030
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000025 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000017 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003394 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000923 2 0.000035
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000927 2 0.000032
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000055 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000961 2 0.000033
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000974 2 0.000031
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000975 2 0.000034
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000009 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Initial 0.003442 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.001002 2 0.000031
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000017 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 47 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=0 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.001041 2 0.000075
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000029 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.001065 2 0.000033
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000046 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.001067 2 0.000080
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000052 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000990 2 0.000112
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000057 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000061 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.001027 2 0.000034
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000928 2 0.000058
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000691 2 0.000311
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000025 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000088 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000719 2 0.000032
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000047 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Reset 0.000514 2 0.000233
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] exit Start 0.000015 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000052 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 48 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:17.803254+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:39:47.647274+0000 osd.2 (osd.2) 54 : cluster [DBG] 3.8 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:39:47.661403+0000 osd.2 (osd.2) 55 : cluster [DBG] 3.8 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 58368000 unmapped: 1425408 heap: 59793408 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 55) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:39:47.647274+0000 osd.2 (osd.2) 54 : cluster [DBG] 3.8 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:39:47.661403+0000 osd.2 (osd.2) 55 : cluster [DBG] 3.8 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 48 handle_osd_map epochs [48,49], i have 48, src has [1,49]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.594823 4 0.000066
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.594887 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.595088 4 0.000215
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.595274 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.597916 4 0.000132
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.598018 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.596385 4 0.000077
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.596425 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.595930 4 0.000132
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.596023 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.595469 4 0.000078
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.596372 4 0.000077
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.596431 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.595640 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.595971 4 0.000064
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.596022 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.596764 4 0.000728
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.597459 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.597795 4 0.000101
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.597859 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.595408 4 0.000102
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.595461 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.595640 4 0.000048
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.595673 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.596978 4 0.000721
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.597677 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.595841 4 0.000084
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.595754 4 0.000052
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.595899 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.595811 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.595538 4 0.000576
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.596040 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.596412 4 0.000054
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.596445 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.596998 4 0.000077
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.597056 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.598264 4 0.000056
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.598301 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.596779 4 0.000069
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.596815 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.596896 4 0.000093
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.596964 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.598459 4 0.000070
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.598505 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.596769 4 0.000080
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.596818 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.598284 4 0.000092
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.598332 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.598074 4 0.000133
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.598175 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 peering pruub 96.626022339s@ mbc={}] exit Started/Primary/Peering/WaitUpThru 0.599709 3 0.000311
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.596909 4 0.000117
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.596985 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 peering pruub 96.626022339s@ mbc={}] exit Started/Primary/Peering 0.599852 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.007348061s) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 unknown pruub 96.626022339s@ mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.597221 4 0.000189
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.597383 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.596645 4 0.000049
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.596678 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.596669 4 0.000064
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.596720 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.596572 4 0.000131
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.596437 4 0.000210
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.596580 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.596684 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=36/37 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001558 3 0.000139
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002833 3 0.000036
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002820 3 0.000039
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.3( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002976 3 0.000045
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003050 3 0.000038
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003036 3 0.000293
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003039 3 0.000035
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003257 3 0.000047
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003258 3 0.000072
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003258 3 0.000026
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1d( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003215 3 0.000029
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003231 3 0.000047
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003194 3 0.000035
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.18( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003194 3 0.000033
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003170 3 0.000034
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003171 3 0.000024
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.5( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003150 3 0.000032
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003135 3 0.000022
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003120 3 0.000036
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003106 3 0.000026
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.9( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003078 3 0.000026
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003093 3 0.000033
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003416 3 0.000169
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003099 3 0.000031
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=36/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003111 3 0.000065
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=36/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=36/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.0( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=36/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 44'63 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003058 3 0.000042
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003030 3 0.000052
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002982 3 0.000032
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.15( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002957 3 0.000035
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000049 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004075 3 0.000161
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.14( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.1b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000190 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.c( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003981 3 0.000041
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003362 3 0.000186
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 49 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/36 les/c/f=49/37/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.922587395s of 10.002945900s, submitted: 197
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:18.803410+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:39:48.625883+0000 osd.2 (osd.2) 56 : cluster [DBG] 3.16 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:39:48.640026+0000 osd.2 (osd.2) 57 : cluster [DBG] 3.16 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 58662912 unmapped: 1130496 heap: 59793408 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 394302 data_alloc: 218103808 data_used: 36864
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 57) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:39:48.625883+0000 osd.2 (osd.2) 56 : cluster [DBG] 3.16 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:39:48.640026+0000 osd.2 (osd.2) 57 : cluster [DBG] 3.16 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 49 heartbeat osd_stat(store_statfs(0x4fe14a000/0x0/0x4ffc00000, data 0x3ba96/0x82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:19.803568+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 58679296 unmapped: 1114112 heap: 59793408 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 49 handle_osd_map epochs [50,50], i have 49, src has [1,50]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000047 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000021
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001535 1 0.000459
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000110 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000043 1 0.000070
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000054 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000094 1 0.000172
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000047 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000037
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000046 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000103 1 0.000407
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000282 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000109
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000175 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000103 1 0.000344
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000027 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000038
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000077 1 0.000111
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000032 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000035
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000045 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000063 1 0.000131
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000205 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000033
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000113 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000091 1 0.000209
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000027 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000096 1 0.000028
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000018 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000047 1 0.000020
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000051 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000037
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000053 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000057 1 0.000191
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000036 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000249 1 0.000263
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000086 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000053 1 0.000207
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.028220 1 0.000029
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.029844 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.624770 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.624797 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.026311 1 0.000012
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.029417 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.625109 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.625124 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.19] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973402023s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222656250s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.19] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973369598s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222656250s@ mbc={}] exit Reset 0.000047 1 0.000077
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973369598s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222656250s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973369598s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222656250s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973369598s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222656250s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973369598s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222656250s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973369598s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222656250s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active+clean] exit Started/Primary/Active/Clean 2.026719 1 0.000021
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary/Active 2.029725 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary 2.627752 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started 2.627769 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.d] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973158836s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 100.222549438s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.d] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973132133s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.222549438s@ mbc={}] exit Reset 0.000038 1 0.000063
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973132133s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.222549438s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973132133s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.222549438s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973132133s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.222549438s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973132133s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.222549438s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973132133s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.222549438s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1e] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971399307s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.220855713s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1e] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.025792 1 0.000018
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.029882 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.626319 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.626351 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.b] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972857475s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222450256s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.b] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972840309s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222450256s@ mbc={}] exit Reset 0.000030 1 0.000130
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972840309s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222450256s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972840309s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222450256s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972840309s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222450256s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972840309s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222450256s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972840309s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222450256s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.026809 1 0.000013
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.029884 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.625916 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.625929 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.13] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972937584s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222656250s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.13] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972923279s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222656250s@ mbc={}] exit Reset 0.000025 1 0.000045
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972923279s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222656250s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972923279s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222656250s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972923279s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222656250s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972923279s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222656250s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972923279s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222656250s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.026864 1 0.000431
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.029935 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.627402 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.627417 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.12] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972858429s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222679138s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.12] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972841263s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222679138s@ mbc={}] exit Reset 0.000032 1 0.000050
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972841263s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222679138s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972841263s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222679138s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972841263s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222679138s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972841263s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222679138s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972841263s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222679138s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.026370 1 0.000028
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.026677 1 0.000012
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.029959 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.627828 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.627848 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.10] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972868919s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222824097s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.10] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972855568s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222824097s@ mbc={}] exit Reset 0.000023 1 0.000041
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972855568s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222824097s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972855568s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222824097s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972855568s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222824097s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972855568s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222824097s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972855568s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222824097s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.029848 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.627652 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.627686 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.026724 1 0.000016
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.029971 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.625880 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.625900 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.11] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.973010063s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223129272s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.026756 1 0.000017
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.029975 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.626434 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.626446 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.7] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972805977s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222976685s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.7] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972786903s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222976685s@ mbc={}] exit Reset 0.000027 1 0.000038
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972786903s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222976685s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972786903s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222976685s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972786903s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222976685s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972786903s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222976685s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972786903s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222976685s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.11] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1a] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972826004s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222915649s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1a] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972607613s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222915649s@ mbc={}] exit Reset 0.000231 1 0.000254
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972607613s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222915649s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972607613s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222915649s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972607613s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222915649s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972607613s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222915649s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.026898 1 0.000019
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972607613s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222915649s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.030078 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.626905 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.626921 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.4] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972652435s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223014832s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.4] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972627640s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223014832s@ mbc={}] exit Reset 0.000040 1 0.000067
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972763062s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223129272s@ mbc={}] exit Reset 0.000305 1 0.000465
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972763062s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223129272s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972763062s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223129272s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972763062s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223129272s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972627640s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223014832s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972763062s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223129272s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.026999 1 0.000013
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972627640s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223014832s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972627640s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223014832s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.030157 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972627640s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223014832s@ mbc={}] exit Start 0.000009 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.627132 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972627640s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223014832s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.627147 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.8] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972572327s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223045349s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.8] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972557068s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223045349s@ mbc={}] exit Reset 0.000028 1 0.000048
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972557068s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223045349s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972557068s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223045349s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972557068s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223045349s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972557068s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223045349s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972557068s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223045349s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.027046 1 0.000018
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.030216 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.628747 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.628762 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.f] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active+clean] exit Started/Primary/Active/Clean 2.027114 1 0.000013
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary/Active 2.030240 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972480774s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223052979s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary 2.627067 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started 2.627079 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.f] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972464561s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223052979s@ mbc={}] exit Reset 0.000028 1 0.000084
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972464561s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223052979s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.9] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972464561s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223052979s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972464561s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223052979s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972475052s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 100.223068237s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972464561s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223052979s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972464561s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223052979s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.9] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972452164s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223068237s@ mbc={}] exit Reset 0.000034 1 0.000047
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972763062s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223129272s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972452164s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223068237s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972452164s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223068237s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972452164s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223068237s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active+clean] exit Started/Primary/Active/Clean 2.026962 1 0.000249
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972452164s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223068237s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972452164s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223068237s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary/Active 2.030299 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary 2.628483 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started 2.628498 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.e] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972394943s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 100.223098755s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.e] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972373962s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223098755s@ mbc={}] exit Reset 0.000032 1 0.000050
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972373962s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223098755s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972373962s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223098755s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972373962s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223098755s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972373962s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223098755s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972373962s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223098755s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.027192 1 0.000015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.030313 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.627307 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.627324 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972376823s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223136902s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.027210 1 0.000011
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.030292 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972361565s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223136902s@ mbc={}] exit Reset 0.000035 1 0.000041
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.627693 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972361565s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223136902s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.627718 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972361565s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223136902s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972361565s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223136902s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972361565s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223136902s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972361565s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223136902s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.2] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972357750s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223175049s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.2] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972344398s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223175049s@ mbc={}] exit Reset 0.000023 1 0.000038
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972344398s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223175049s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972344398s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223175049s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972344398s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223175049s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972344398s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223175049s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972344398s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223175049s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active+clean] exit Started/Primary/Active/Clean 2.027204 1 0.000069
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary/Active 2.030320 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary 2.627025 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started 2.627038 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.14] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972307205s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 100.223182678s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.14] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972287178s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223182678s@ mbc={}] exit Reset 0.000030 1 0.000047
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972287178s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223182678s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972287178s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223182678s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972287178s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223182678s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972287178s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223182678s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972287178s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223182678s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.026858 1 0.000016
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.030719 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.627783 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.627797 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.6] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972043037s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.222984314s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.6] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972032547s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222984314s@ mbc={}] exit Reset 0.000020 1 0.000700
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972032547s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222984314s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972032547s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222984314s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972032547s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222984314s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972032547s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222984314s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972032547s) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.222984314s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active+clean] exit Started/Primary/Active/Clean 2.027756 1 0.000011
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary/Active 2.030779 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started/Primary 2.627524 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] exit Started 2.627549 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 44'64 mlcod 44'64 active mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.15] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971755981s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 44'64 active pruub 100.223205566s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.15] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971536636s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223205566s@ mbc={}] exit Reset 0.000250 1 0.000318
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971536636s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223205566s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971536636s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223205566s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971536636s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223205566s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971536636s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223205566s@ mbc={}] exit Start 0.000048 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971536636s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY pruub 100.223205566s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.027765 1 0.000023
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.031214 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.627981 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.628019 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.16] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.972020149s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223991394s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.16] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971908569s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223991394s@ mbc={}] exit Reset 0.000154 1 0.000207
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971908569s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223991394s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971908569s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223991394s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971908569s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223991394s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971908569s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223991394s@ mbc={}] exit Start 0.000048 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971908569s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223991394s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000025 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.028928 1 0.000016
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 2.031918 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 2.628508 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000115 1 0.000025
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.628575 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=47) [2] r=0 lpr=47 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.17] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.970626831s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active pruub 100.223220825s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.17] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971295357s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.220855713s@ mbc={}] exit Reset 0.003439 1 0.000402
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971295357s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.220855713s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971295357s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.220855713s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971295357s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.220855713s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971295357s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.220855713s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.971295357s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.220855713s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000016 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.970546722s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223220825s@ mbc={}] exit Reset 0.000095 1 0.000127
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.970546722s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223220825s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.970546722s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223220825s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.970546722s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223220825s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.970546722s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223220825s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50 pruub=13.970546722s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.223220825s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000125 1 0.000026
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000029 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000035
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000059 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000031 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000037
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000097 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000064 1 0.000187
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000024 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000085 1 0.000021
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001051 1 0.000467
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000020 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000130
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000580 1 0.000031
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000030 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000012
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000337 1 0.000026
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000022 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000075 1 0.000115
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000024 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000012
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000048 1 0.000020
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000040 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000066
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000048 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000083 1 0.000192
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000033 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000036
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000121 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000073 1 0.000214
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000039 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000030
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000048 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000057 1 0.000143
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000080 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000022
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000111 1 0.000023
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000019 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000072 1 0.000019
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000058 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000024 1 0.000137
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000101 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000173 1 0.000333
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000277 1 0.000292
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000435 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000084 1 0.000747
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000029 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000021 1 0.000061
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000056 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000049 1 0.000144
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000025 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000116 1 0.000030
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000024 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000038
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000062 1 0.000058
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000022 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000041 1 0.000049
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000219 1 0.000023
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000033 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000203 1 0.000211
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000155 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000047 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000010
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000024 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 50 handle_osd_map epochs [50,50], i have 50, src has [1,50]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000927 1 0.000044
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020906 2 0.000054
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019589 2 0.000056
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019581 2 0.000787
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018698 2 0.000053
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018441 2 0.000057
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.d( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017545 2 0.000090
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017155 2 0.000111
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.2( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016818 2 0.000115
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016163 2 0.000046
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.19] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.19] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.b] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.13] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.13] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018061 2 0.000029
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.b] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.12] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.12] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.10] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.10] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1a] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1a] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000015 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003736 1 0.000261
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000055 1 0.000034
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.f] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.f] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.11] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.11] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.2] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.2] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.14] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.d] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.14] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.7] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.d] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.4] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.7] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.8] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.4] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.9] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.8] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.e] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.e] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.15] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.16] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.9] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1e] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1e] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.17] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.15] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020448 2 0.000060
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.6] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014993 2 0.000106
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.8( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.16] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013304 2 0.000043
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013698 2 0.000502
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013193 2 0.001043
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012514 2 0.000041
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.18( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012159 2 0.000287
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011873 2 0.000024
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011737 2 0.000064
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1c( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011295 2 0.000097
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009845 2 0.000072
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.11( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010267 2 0.000765
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009613 2 0.000051
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008749 2 0.000102
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.015011 2 0.000028
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.12( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.17] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.6] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007976 2 0.000053
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007423 2 0.000027
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.007724 2 0.000773
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006867 2 0.004512
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000026 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006784 2 0.000028
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.15( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007383 2 0.000638
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[11.1a( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003775 2 0.000028
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002450 2 0.000024
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002552 2 0.000068
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 50 heartbeat osd_stat(store_statfs(0x4fe14a000/0x0/0x4ffc00000, data 0x3ba96/0x82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:20.803699+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 50 handle_osd_map epochs [50,51], i have 50, src has [1,51]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 50 handle_osd_map epochs [51,51], i have 51, src has [1,51]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 50 handle_osd_map epochs [51,51], i have 51, src has [1,51]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894665 2 0.000023
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.917167 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894745 2 0.000015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.915269 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.892125 2 0.000021
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.907301 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895103 2 0.000013
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.914943 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895207 2 0.000016
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.914069 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.909659 6 0.000081
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895364 2 0.000016
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.913944 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.893170 2 0.000019
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.913741 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.910123 6 0.000352
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894761 2 0.000048
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.909948 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.897280 2 0.000013
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.914982 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894868 2 0.000015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.909173 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.897435 2 0.000032
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.914679 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.897379 2 0.000035
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.913644 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.913291 6 0.000123
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894989 2 0.000019
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894299 2 0.000044
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.905454 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.905773 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894344 2 0.000014
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.901825 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895245 2 0.000016
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.907771 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895478 2 0.000026
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.908614 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.897433 2 0.000013
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.915082 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.896522 2 0.000025
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.915645 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895088 2 0.000031
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.903200 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895654 2 0.000017
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.904795 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894855 2 0.000019
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.901219 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895832 2 0.000017
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.905543 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895897 2 0.000015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.905878 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895960 2 0.000016
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.907389 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.896389 2 0.000016
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.908216 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895680 2 0.000012
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.900416 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.896590 2 0.000017
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.908652 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895764 2 0.000013
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.898296 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.896819 2 0.000018
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.910631 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.896962 2 0.000048
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.910397 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.896355 2 0.000016
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.903929 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.896383 2 0.000028
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.904218 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.896623 2 0.000018
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.903667 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.913711 7 0.000166
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.914648 7 0.000103
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 44'64 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005989 4 0.000118
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005470 4 0.000170
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005438 4 0.000084
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000047 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005276 4 0.000109
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005870 4 0.000244
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005195 4 0.000073
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005776 4 0.000181
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.917319 7 0.000227
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.915575 7 0.000498
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.916989 7 0.000071
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000032 1 0.000027
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.4] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005791 4 0.000036
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005676 4 0.000045
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005686 4 0.000035
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000651 1 0.000015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.16] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005742 4 0.000201
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005673 4 0.000107
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005690 4 0.000125
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000018 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005545 4 0.000262
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005291 4 0.000967
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005250 4 0.000023
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005203 4 0.000026
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005145 4 0.000100
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001001 1 0.000020
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005581 4 0.000080
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.918980 7 0.000043
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.916087 7 0.000620
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.916168 7 0.003158
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.918672 7 0.000090
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000037 1 0.000023
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.12] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000082 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.918421 7 0.000035
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000179 1 0.000009
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.17] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000252 1 0.000021
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1e] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000368 1 0.000010
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.7] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000505 1 0.000025
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.918887 7 0.000093
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.f] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000049 1 0.000250
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007207 4 0.000328
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000022 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.8] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009104 4 0.000038
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008809 4 0.001432
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009284 4 0.000037
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008694 4 0.000088
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008722 4 0.000228
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008678 4 0.000024
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009442 4 0.000037
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008587 4 0.000074
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008493 4 0.000082
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000034 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000052 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008381 4 0.000047
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=50/51 n=1 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.008273 5 0.000148
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007994 4 0.000108
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010301 4 0.000393
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000020 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [2] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.923946 7 0.000208
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000019 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.922765 7 0.000092
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000313 1 0.000026
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009398 4 0.000087
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.924763 7 0.000083
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.924128 7 0.000029
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.923148 7 0.000059
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.924389 7 0.000042
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.923723 7 0.000404
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.923814 7 0.000035
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.014662 3 0.000032
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.014679 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.008309 1 0.000033
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.008387 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.925894 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.4] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.013455 1 0.000038
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.014134 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.929819 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.16] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.020456 1 0.000018
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.021476 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.938491 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.027809 1 0.000017
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.027872 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.946879 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.12] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.035056 1 0.000026
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.035259 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.951371 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.17] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 60981248 unmapped: 909312 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.042442 1 0.000023
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.042722 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.958918 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1e] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.049689 1 0.000257
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.050079 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.968771 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.7] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.056769 1 0.000097
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.057306 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.975755 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.f] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.064026 1 0.000076
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.064109 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.983226 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.8] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 44'2 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.122765 1 0.000079
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 44'2 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 44'2 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [2] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 44'2 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.122832 1 0.000022
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.b] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.122880 1 0.000010
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.2] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.122532 1 0.000027
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.19] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.122611 1 0.000010
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.10] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.122689 1 0.000015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.6] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.122756 1 0.000014
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.13] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.122821 1 0.000022
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.11] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.122861 1 0.000079
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1a] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.121632 1 0.000055
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.9] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.007576 1 0.000066
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.130446 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.054440 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.b] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.014827 1 0.000098
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.137781 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.060576 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.2] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.022195 1 0.000063
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.144774 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.069573 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.19] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.029520 1 0.000065
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.152170 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.076317 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.10] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.036779 1 0.000059
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.159525 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.082705 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.6] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.044192 1 0.000056
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.166997 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.091410 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.13] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.051450 1 0.000059
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.174324 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.098087 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.11] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.058859 1 0.000051
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.181772 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.105648 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.1a] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.132806 2 0.000095
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.254475 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started 1.179323 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.9] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.329761 3 0.000053
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.329810 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000072 1 0.000116
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.e] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.013894 2 0.000115
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.014038 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started 1.253571 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.e] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.397407 3 0.000028
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.397433 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000046 1 0.000054
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.d] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.008547 2 0.000163
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.008657 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started 1.319428 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.d] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.605091 2 0.000054
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.605117 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000054 1 0.000057
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.15] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.008728 2 0.000184
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.008985 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started 1.527929 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.15] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:21.803826+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.897400 2 0.000025
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.897440 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000036 1 0.000064
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.14] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.008619 2 0.000160
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.008691 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] lb MIN local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=49'65 lcod 44'64 mlcod 0'0 active mbc={}] exit Started 1.820810 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[10.14] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 51 handle_osd_map epochs [51,52], i have 51, src has [1,52]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 61153280 unmapped: 1785856 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:22.803982+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 61153280 unmapped: 1785856 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 52 handle_osd_map epochs [52,53], i have 52, src has [1,53]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:23.804107+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:39:53.572713+0000 osd.2 (osd.2) 58 : cluster [DBG] 3.11 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:39:53.586861+0000 osd.2 (osd.2) 59 : cluster [DBG] 3.11 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 61243392 unmapped: 1695744 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 416168 data_alloc: 218103808 data_used: 36864
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 59) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:39:53.572713+0000 osd.2 (osd.2) 58 : cluster [DBG] 3.11 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:39:53.586861+0000 osd.2 (osd.2) 59 : cluster [DBG] 3.11 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:24.804219+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 53 heartbeat osd_stat(store_statfs(0x4fe13e000/0x0/0x4ffc00000, data 0x43651/0x8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 53 handle_osd_map epochs [54,54], i have 53, src has [1,54]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 53 handle_osd_map epochs [54,54], i have 54, src has [1,54]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 61267968 unmapped: 1671168 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 54 handle_osd_map epochs [54,55], i have 54, src has [1,55]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:25.804350+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 61300736 unmapped: 1638400 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 55 handle_osd_map epochs [55,56], i have 55, src has [1,56]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:26.804450+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 61104128 unmapped: 1835008 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 56 heartbeat osd_stat(store_statfs(0x4fe134000/0x0/0x4ffc00000, data 0x48bd5/0x98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:27.804589+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 61112320 unmapped: 1826816 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 56 handle_osd_map epochs [56,57], i have 56, src has [1,57]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:28.804716+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:39:58.560016+0000 osd.2 (osd.2) 60 : cluster [DBG] 3.e scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:39:58.574241+0000 osd.2 (osd.2) 61 : cluster [DBG] 3.e scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 61095936 unmapped: 1843200 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 429363 data_alloc: 218103808 data_used: 40960
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 61) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:39:58.560016+0000 osd.2 (osd.2) 60 : cluster [DBG] 3.e scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:39:58.574241+0000 osd.2 (osd.2) 61 : cluster [DBG] 3.e scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.878599167s of 10.974405289s, submitted: 283
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:29.804872+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:39:59.600249+0000 osd.2 (osd.2) 62 : cluster [DBG] 3.18 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:39:59.614371+0000 osd.2 (osd.2) 63 : cluster [DBG] 3.18 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 61095936 unmapped: 1843200 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 57 handle_osd_map epochs [58,59], i have 57, src has [1,59]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 63) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:39:59.600249+0000 osd.2 (osd.2) 62 : cluster [DBG] 3.18 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:39:59.614371+0000 osd.2 (osd.2) 63 : cluster [DBG] 3.18 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=0 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000074 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=0 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000023 1 0.000045
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000081 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000116 1 0.000182
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000042 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000309 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=0 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000039 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=0 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000019
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000089 1 0.000034
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000022 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000127 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=0 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000040 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=0 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000021
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000089 1 0.000030
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000129 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=0 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000024 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=0 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000082 1 0.000040
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000016 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000112 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 59 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1b deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1b deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:30.805045+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:00.585138+0000 osd.2 (osd.2) 64 : cluster [DBG] 4.1b deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:00.599237+0000 osd.2 (osd.2) 65 : cluster [DBG] 4.1b deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 59 handle_osd_map epochs [59,60], i have 59, src has [1,60]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 59 handle_osd_map epochs [60,60], i have 60, src has [1,60]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.557621 2 0.000049
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.557959 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.559552 2 0.000216
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.558006 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.558886 2 0.000046
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.560090 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.560213 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.559257 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.559273 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.557072 2 0.000036
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.557262 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000101 1 0.000314
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000117 1 0.000366
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.557306 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.001618 1 0.001864
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000793 1 0.001617
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000162 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000229 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 62144512 unmapped: 794624 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 65) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:00.585138+0000 osd.2 (osd.2) 64 : cluster [DBG] 4.1b deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:00.599237+0000 osd.2 (osd.2) 65 : cluster [DBG] 4.1b deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:31.805169+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:01.578926+0000 osd.2 (osd.2) 66 : cluster [DBG] 4.1c scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:01.593029+0000 osd.2 (osd.2) 67 : cluster [DBG] 4.1c scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 62177280 unmapped: 761856 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 67) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:01.578926+0000 osd.2 (osd.2) 66 : cluster [DBG] 4.1c scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:01.593029+0000 osd.2 (osd.2) 67 : cluster [DBG] 4.1c scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 60 heartbeat osd_stat(store_statfs(0x4fe127000/0x0/0x4ffc00000, data 0x4fb3b/0xa4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 60 handle_osd_map epochs [61,61], i have 60, src has [1,61]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.16( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.561043 5 0.000038
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.16( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.16( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.6( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=6 mbc={}] exit Started/Stray 1.559191 5 0.000495
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.6( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.6( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.1e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.561221 5 0.000031
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.1e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.1e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.559428 5 0.000493
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: not registered w/ OSD
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.6( v 44'389 lc 40'63 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002377 4 0.000069
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.6( v 44'389 lc 40'63 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.6( v 44'389 lc 40'63 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000043 1 0.000065
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.6( v 44'389 lc 40'63 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: not registered w/ OSD
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: not registered w/ OSD
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: not registered w/ OSD
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.042730 1 0.000024
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.e( v 44'389 lc 40'53 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.044961 4 0.000057
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.e( v 44'389 lc 40'53 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.e( v 44'389 lc 40'53 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000105 1 0.000033
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.e( v 44'389 lc 40'53 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.038535 1 0.000086
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.16( v 44'389 lc 40'69 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.084030 4 0.000149
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.16( v 44'389 lc 40'69 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.16( v 44'389 lc 40'69 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000030 1 0.000042
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.16( v 44'389 lc 40'69 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.031563 1 0.000024
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.1e( v 44'389 lc 40'220 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.115602 4 0.000085
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.1e( v 44'389 lc 40'220 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.1e( v 44'389 lc 40'220 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000035 1 0.000054
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.1e( v 44'389 lc 40'220 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.038624 1 0.000021
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 61 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:32.805290+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 491520 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 61 handle_osd_map epochs [61,62], i have 61, src has [1,62]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.499024 1 0.000024
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.582700 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.142447 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000081 1 0.000116
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.428795 1 0.000029
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.583127 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.144369 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000053 1 0.000083
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.538195 1 0.000025
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.583420 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.142994 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.468096 1 0.000027
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000336 1 0.000366
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.583781 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.144850 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] r=-1 lpr=60 pi=[45,60)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000202 1 0.000177
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 62 handle_osd_map epochs [62,62], i have 62, src has [1,62]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003119 2 0.000036
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004247 2 0.000033
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=12
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=12
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=11
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001161 2 0.000104
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000813 2 0.000037
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000020 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004643 2 0.000086
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.005329 2 0.000032
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=9
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000306 2 0.000037
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=10
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=10
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000279 2 0.000044
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000037 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=0 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000039 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=0 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000016
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000089 1 0.000037
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000021 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000121 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 62 handle_osd_map epochs [61,62], i have 62, src has [1,62]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=0 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000047 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=0 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000016
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000063 1 0.000027
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000015 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000094 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=0 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000040 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=0 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000019
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000101 1 0.000049
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000026 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000152 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=0 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000059 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=0 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000014
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000080 1 0.000029
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000016 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000106 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe125000/0x0/0x4ffc00000, data 0x5192d/0xa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:33.805397+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 294912 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490619 data_alloc: 218103808 data_used: 53248
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 62 handle_osd_map epochs [62,63], i have 62, src has [1,63]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 62 handle_osd_map epochs [62,63], i have 63, src has [1,63]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996693 2 0.000093
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001844 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995919 2 0.000090
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001605 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996998 2 0.000051
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001351 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.805797 2 0.000042
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.805925 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=62/63 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.805970 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.806560 2 0.000039
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.806691 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.806711 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000079 1 0.000189
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.805752 2 0.000062
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.805932 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.805963 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=0 lpr=62 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000058 1 0.000089
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.805631 2 0.000032
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.805748 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.805761 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=0 lpr=62 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000034 1 0.000048
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996628 2 0.000040
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001617 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000537 1 0.000547
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=62/63 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=62/63 n=6 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002062 3 0.000076
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=62/63 n=6 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=62/63 n=6 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=62/63 n=6 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=6 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002378 3 0.000065
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=6 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=6 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=6 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002226 3 0.000165
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000022 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 63 handle_osd_map epochs [63,63], i have 63, src has [1,63]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002987 3 0.000053
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/45 les/c/f=63/46/0 sis=62) [2] r=0 lpr=62 pi=[45,62)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 63 handle_osd_map epochs [63,63], i have 63, src has [1,63]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:34.805529+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:04.593581+0000 osd.2 (osd.2) 68 : cluster [DBG] 4.1 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:04.607710+0000 osd.2 (osd.2) 69 : cluster [DBG] 4.1 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 278528 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 69) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:04.593581+0000 osd.2 (osd.2) 68 : cluster [DBG] 4.1 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:04.607710+0000 osd.2 (osd.2) 69 : cluster [DBG] 4.1 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _renew_subs
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.17( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=3 mbc={}] exit Started/Stray 1.529094 5 0.000036
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.17( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=3 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.17( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=3 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.529290 5 0.000035
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.7( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.528972 5 0.000039
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.7( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.7( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.1f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.529691 5 0.000062
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.1f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.1f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: not registered w/ OSD
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: not registered w/ OSD
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: not registered w/ OSD
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.17( v 44'389 lc 39'38 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=3 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002504 4 0.000102
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.17( v 44'389 lc 39'38 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=3 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.17( v 44'389 lc 39'38 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=3 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000032 1 0.000021
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.17( v 44'389 lc 39'38 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=3 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: not registered w/ OSD
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.021680 1 0.000020
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.1f( v 44'389 lc 40'183 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.024014 4 0.000137
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.1f( v 44'389 lc 40'183 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.1f( v 44'389 lc 40'183 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000042 1 0.000047
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.1f( v 44'389 lc 40'183 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.038613 1 0.000018
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.f( v 44'389 lc 40'44 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.062900 4 0.000178
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.f( v 44'389 lc 40'44 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.f( v 44'389 lc 40'44 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000051 1 0.000059
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.f( v 44'389 lc 40'44 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.052913 1 0.000023
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.7( v 44'389 lc 40'47 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.115952 4 0.000533
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.7( v 44'389 lc 40'47 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.7( v 44'389 lc 40'47 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000043 1 0.000074
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.7( v 44'389 lc 40'47 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.038479 1 0.000029
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:35.805701+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 64 handle_osd_map epochs [64,65], i have 64, src has [1,65]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.259196 1 0.000024
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.321919 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 1.851631 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000052 1 0.000079
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.167718 1 0.000024
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.322271 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 1.851326 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.206438 1 0.000021
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.322378 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 1.851715 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[54,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000083 1 0.000115
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000050 1 0.000082
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.298388 1 0.000018
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.322666 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 1.851783 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] r=-1 lpr=63 pi=[53,63)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000039 1 0.000083
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003759 2 0.000089
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004361 2 0.000030
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004122 2 0.000023
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 65 handle_osd_map epochs [65,65], i have 65, src has [1,65]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004092 2 0.000038
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=6
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=6
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000568 2 0.000044
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=11
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000533 2 0.000024
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000562 2 0.000013
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=13
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=13
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000542 2 0.000155
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 16384 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:36.805835+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:06.656416+0000 osd.2 (osd.2) 70 : cluster [DBG] 4.a scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:06.670521+0000 osd.2 (osd.2) 71 : cluster [DBG] 4.a scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 65 handle_osd_map epochs [65,66], i have 65, src has [1,66]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 65 handle_osd_map epochs [65,66], i have 66, src has [1,66]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997420 2 0.000034
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002358 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997573 2 0.000021
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002348 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=65/66 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997701 2 0.000025
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002415 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997871 2 0.000032
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002255 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/54 les/c/f=66/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001688 3 0.000087
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/54 les/c/f=66/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/54 les/c/f=66/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/54 les/c/f=66/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=65/66 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=6 ec=45/34 lis/c=65/54 les/c/f=66/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003229 3 0.000096
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=6 ec=45/34 lis/c=65/54 les/c/f=66/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=6 ec=45/34 lis/c=65/54 les/c/f=66/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000020 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/53 les/c/f=66/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003104 3 0.000173
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/53 les/c/f=66/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/53 les/c/f=66/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/53 les/c/f=66/54/0 sis=65) [2] r=0 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=6 ec=45/34 lis/c=65/54 les/c/f=66/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=65/66 n=6 ec=45/34 lis/c=65/54 les/c/f=66/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003746 3 0.000463
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=65/66 n=6 ec=45/34 lis/c=65/54 les/c/f=66/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=65/66 n=6 ec=45/34 lis/c=65/54 les/c/f=66/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000056 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=65/66 n=6 ec=45/34 lis/c=65/54 les/c/f=66/55/0 sis=65) [2] r=0 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 62971904 unmapped: 1015808 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 66 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x58661/0xba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 71) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:06.656416+0000 osd.2 (osd.2) 70 : cluster [DBG] 4.a scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:06.670521+0000 osd.2 (osd.2) 71 : cluster [DBG] 4.a scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:37.806001+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 958464 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: handle_auth_request added challenge on 0x5640f3412800
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:38.806134+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 958464 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 537656 data_alloc: 218103808 data_used: 53248
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:39.806264+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 950272 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.402991295s of 10.517934799s, submitted: 132
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=0 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000049 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=0 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000022 1 0.000049
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000285 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000522 1 0.000382
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=0 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000041 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=0 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000034
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000099 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000069 1 0.000192
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000043 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.001566 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000039 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.001159 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fe10d000/0x0/0x4ffc00000, data 0x5bdac/0xc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=0 pi=[43,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000035 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=0 pi=[43,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000012
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000159 1 0.000030
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000627 2 0.000033
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 67 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:40.806393+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 67 handle_osd_map epochs [67,68], i have 67, src has [1,68]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 67 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 67 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.819200 2 0.001043
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.820798 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.821121 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000057 1 0.000083
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.339253 2 0.000044
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.340083 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=37'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=67/68 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.819583 2 0.001105
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.820793 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.820937 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000162 1 0.000221
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000122 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=67/68 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=67/68 n=1 ec=43/21 lis/c=67/43 les/c/f=68/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001843 3 0.000109
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=67/68 n=1 ec=43/21 lis/c=67/43 les/c/f=68/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=67/68 n=1 ec=43/21 lis/c=67/43 les/c/f=68/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 68 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=67/68 n=1 ec=43/21 lis/c=67/43 les/c/f=68/44/0 sis=67) [2] r=0 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 1826816 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:41.806488+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 1785856 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _renew_subs
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.8( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.674800 5 0.000209
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.18( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.675883 5 0.000043
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.8( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.8( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.18( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.18( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: not registered w/ OSD
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.18( v 44'389 lc 39'37 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001643 4 0.000127
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.18( v 44'389 lc 39'37 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.18( v 44'389 lc 39'37 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000061 1 0.000074
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.18( v 44'389 lc 39'37 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: not registered w/ OSD
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035375 1 0.000028
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.8( v 44'389 lc 40'62 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.037209 4 0.000098
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.8( v 44'389 lc 40'62 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.8( v 44'389 lc 40'62 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000076 1 0.000041
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.8( v 44'389 lc 40'62 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.052698 1 0.000056
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:42.806592+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:12.619357+0000 osd.2 (osd.2) 72 : cluster [DBG] 4.e scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:12.636781+0000 osd.2 (osd.2) 73 : cluster [DBG] 4.e scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.296500 1 0.000019
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.333653 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.009584 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.243603 1 0.000041
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000049 1 0.000075
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.333752 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.008737 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[45,68)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000077 1 0.000213
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000067 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000035 1 0.000148
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000028 1 0.000240
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=15
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000772 3 0.000028
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=9
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000754 3 0.000191
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000044 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 1605632 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 73) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:12.619357+0000 osd.2 (osd.2) 72 : cluster [DBG] 4.e scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:12.636781+0000 osd.2 (osd.2) 73 : cluster [DBG] 4.e scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:43.806710+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 70 handle_osd_map epochs [70,71], i have 70, src has [1,71]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 70 handle_osd_map epochs [70,71], i have 71, src has [1,71]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995667 2 0.000042
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995827 2 0.000128
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.996738 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.996792 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=70/71 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=70/71 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=70/71 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=70/71 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=70/71 n=6 ec=45/34 lis/c=70/45 les/c/f=71/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001887 3 0.000195
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=70/71 n=6 ec=45/34 lis/c=70/45 les/c/f=71/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=70/71 n=6 ec=45/34 lis/c=70/45 les/c/f=71/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=70/71 n=6 ec=45/34 lis/c=70/45 les/c/f=71/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=70/71 n=5 ec=45/34 lis/c=70/45 les/c/f=71/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002102 3 0.000447
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=70/71 n=5 ec=45/34 lis/c=70/45 les/c/f=71/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=70/71 n=5 ec=45/34 lis/c=70/45 les/c/f=71/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 71 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=70/71 n=5 ec=45/34 lis/c=70/45 les/c/f=71/46/0 sis=70) [2] r=0 lpr=70 pi=[45,70)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 1400832 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 571771 data_alloc: 218103808 data_used: 53248
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 71 handle_osd_map epochs [71,71], i have 71, src has [1,71]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 71 handle_osd_map epochs [71,71], i have 71, src has [1,71]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:44.806821+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 1392640 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 71 heartbeat osd_stat(store_statfs(0x4fe0ff000/0x0/0x4ffc00000, data 0x62b09/0xce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:45.806920+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 1392640 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:46.807051+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:16.681727+0000 osd.2 (osd.2) 74 : cluster [DBG] 4.11 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:16.695799+0000 osd.2 (osd.2) 75 : cluster [DBG] 4.11 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 1392640 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 75) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:16.681727+0000 osd.2 (osd.2) 74 : cluster [DBG] 4.11 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:16.695799+0000 osd.2 (osd.2) 75 : cluster [DBG] 4.11 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:47.807207+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 1384448 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 71 heartbeat osd_stat(store_statfs(0x4fe100000/0x0/0x4ffc00000, data 0x62b09/0xce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:48.807304+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 1507328 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 572039 data_alloc: 218103808 data_used: 53248
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:49.807438+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 71 heartbeat osd_stat(store_statfs(0x4fe100000/0x0/0x4ffc00000, data 0x62b09/0xce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 1499136 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:50.807540+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 1499136 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 71 heartbeat osd_stat(store_statfs(0x4fe100000/0x0/0x4ffc00000, data 0x62b09/0xce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.437329292s of 11.483425140s, submitted: 49
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 71 handle_osd_map epochs [72,72], i have 72, src has [1,72]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:51.807632+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:21.746825+0000 osd.2 (osd.2) 76 : cluster [DBG] 4.18 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:21.760697+0000 osd.2 (osd.2) 77 : cluster [DBG] 4.18 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 1449984 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 72 handle_osd_map epochs [72,73], i have 72, src has [1,73]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 77) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:21.746825+0000 osd.2 (osd.2) 76 : cluster [DBG] 4.18 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:21.760697+0000 osd.2 (osd.2) 77 : cluster [DBG] 4.18 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:52.807749+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 1441792 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:53.807849+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 1425408 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 581003 data_alloc: 218103808 data_used: 61440
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:54.807949+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 1417216 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fe0f8000/0x0/0x4ffc00000, data 0x66254/0xd4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1a deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.1a deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:55.808057+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:25.730531+0000 osd.2 (osd.2) 78 : cluster [DBG] 4.1a deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:25.744450+0000 osd.2 (osd.2) 79 : cluster [DBG] 4.1a deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 1417216 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 79) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:25.730531+0000 osd.2 (osd.2) 78 : cluster [DBG] 4.1a deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:25.744450+0000 osd.2 (osd.2) 79 : cluster [DBG] 4.1a deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.13 deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 4.13 deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:56.808177+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:26.764853+0000 osd.2 (osd.2) 80 : cluster [DBG] 4.13 deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:26.778616+0000 osd.2 (osd.2) 81 : cluster [DBG] 4.13 deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 1409024 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 81) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:26.764853+0000 osd.2 (osd.2) 80 : cluster [DBG] 4.13 deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:26.778616+0000 osd.2 (osd.2) 81 : cluster [DBG] 4.13 deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fe0fa000/0x0/0x4ffc00000, data 0x66254/0xd4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:57.808306+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 1409024 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 73 handle_osd_map epochs [73,74], i have 73, src has [1,74]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:58.808443+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 1409024 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 585919 data_alloc: 218103808 data_used: 69632
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:59.808531+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 1 last_log 82 sent 81 num 1 unsent 1 sending 1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:29.804359+0000 osd.2 (osd.2) 82 : cluster [DBG] 10.3 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 1392640 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 82) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:29.804359+0000 osd.2 (osd.2) 82 : cluster [DBG] 10.3 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=0 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000053 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=0 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000021
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000210 1 0.000042
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000037 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000272 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=0 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000020 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=0 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000012
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000087 1 0.000028
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000016 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000123 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:00.808685+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 1 last_log 83 sent 82 num 1 unsent 1 sending 1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:29.822045+0000 osd.2 (osd.2) 83 : cluster [DBG] 10.3 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 75 handle_osd_map epochs [75,76], i have 75, src has [1,76]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.794003 2 0.000046
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.794153 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.794174 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000082 1 0.000115
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.794479 2 0.000070
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.794767 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.794790 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=0 lpr=75 pi=[45,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000049 1 0.000070
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 76 handle_osd_map epochs [76,76], i have 76, src has [1,76]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 1343488 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 83) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:29.822045+0000 osd.2 (osd.2) 83 : cluster [DBG] 10.3 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:01.808811+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 1318912 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 76 heartbeat osd_stat(store_statfs(0x4fe0ee000/0x0/0x4ffc00000, data 0x6b723/0xdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.473957062s of 10.500686646s, submitted: 23
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.1c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.158959 5 0.000050
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.1c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.1c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.158752 5 0.000042
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.1c( v 44'389 lc 40'125 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001838 4 0.000103
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.1c( v 44'389 lc 40'125 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.1c( v 44'389 lc 40'125 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000037 1 0.000026
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.1c( v 44'389 lc 40'125 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.049742 1 0.000067
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.c( v 44'389 lc 40'79 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.051615 4 0.000167
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.c( v 44'389 lc 40'79 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.c( v 44'389 lc 40'79 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000039 1 0.000018
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.c( v 44'389 lc 40'79 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.038766 1 0.000025
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 77 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:02.808919+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 1212416 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 77 handle_osd_map epochs [77,78], i have 77, src has [1,78]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.002783 1 0.000025
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.054478 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.213465 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000060 1 0.000088
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.964109 1 0.000026
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.054597 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.213405 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[45,76)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000033 1 0.000049
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001773 2 0.000023
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 78 handle_osd_map epochs [78,78], i have 78, src has [1,78]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002442 2 0.000045
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=10
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=10
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000908 2 0.000043
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000326 2 0.000027
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 78 heartbeat osd_stat(store_statfs(0x4fe0ec000/0x0/0x4ffc00000, data 0x6d4e3/0xe1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:03.809022+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 1212416 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 623120 data_alloc: 218103808 data_used: 69632
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 78 handle_osd_map epochs [78,79], i have 78, src has [1,79]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003170 2 0.000583
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006541 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005375 2 0.000075
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.008220 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/45 les/c/f=79/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001672 3 0.000405
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/45 les/c/f=79/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/45 les/c/f=79/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/45 les/c/f=79/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=6 ec=45/34 lis/c=78/45 les/c/f=79/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001253 3 0.000237
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=6 ec=45/34 lis/c=78/45 les/c/f=79/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=6 ec=45/34 lis/c=78/45 les/c/f=79/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 79 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=6 ec=45/34 lis/c=78/45 les/c/f=79/46/0 sis=78) [2] r=0 lpr=78 pi=[45,78)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 79 handle_osd_map epochs [79,79], i have 79, src has [1,79]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 79 handle_osd_map epochs [79,79], i have 79, src has [1,79]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:04.809117+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 79 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x70973/0xe7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 1122304 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:05.809210+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 1089536 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.5 deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.5 deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:06.809324+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:36.775824+0000 osd.2 (osd.2) 84 : cluster [DBG] 10.5 deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:36.789939+0000 osd.2 (osd.2) 85 : cluster [DBG] 10.5 deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1073152 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 85) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:36.775824+0000 osd.2 (osd.2) 84 : cluster [DBG] 10.5 deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:36.789939+0000 osd.2 (osd.2) 85 : cluster [DBG] 10.5 deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 79 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x70973/0xe7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:07.809482+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1073152 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:08.809624+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1056768 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 626872 data_alloc: 218103808 data_used: 81920
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 79 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x70973/0xe7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:09.809770+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1048576 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:10.809910+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.a scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.a scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1048576 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 79 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x70973/0xe7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:11.810017+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:40.830097+0000 osd.2 (osd.2) 86 : cluster [DBG] 10.a scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:40.844208+0000 osd.2 (osd.2) 87 : cluster [DBG] 10.a scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 1015808 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 87) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:40.830097+0000 osd.2 (osd.2) 86 : cluster [DBG] 10.a scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:40.844208+0000 osd.2 (osd.2) 87 : cluster [DBG] 10.a scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.050789833s of 10.089987755s, submitted: 36
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=0 pi=[54,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000041 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=0 pi=[54,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000022
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000107 1 0.000038
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.001156 2 0.000039
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000018 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 81 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:12.810199+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 966656 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 81 handle_osd_map epochs [81,82], i have 81, src has [1,82]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 81 handle_osd_map epochs [82,82], i have 82, src has [1,82]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004278 2 0.000075
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 1.005606 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=81/82 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=81/82 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=81/82 n=1 ec=43/21 lis/c=81/54 les/c/f=82/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.001474 4 0.000118
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=81/82 n=1 ec=43/21 lis/c=81/54 les/c/f=82/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=81/82 n=1 ec=43/21 lis/c=81/54 les/c/f=82/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000055 1 0.000048
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=81/82 n=1 ec=43/21 lis/c=81/54 les/c/f=82/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=81/82 n=1 ec=43/21 lis/c=81/54 les/c/f=82/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=81/82 n=1 ec=43/21 lis/c=81/54 les/c/f=82/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=81/82 n=1 ec=43/21 lis/c=81/54 les/c/f=82/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.126213 2 0.000028
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=81/82 n=1 ec=43/21 lis/c=81/54 les/c/f=82/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=81/82 n=1 ec=43/21 lis/c=81/54 les/c/f=82/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 82 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=81/82 n=1 ec=43/21 lis/c=81/54 les/c/f=82/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:13.810333+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 942080 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 643794 data_alloc: 218103808 data_used: 94208
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 82 handle_osd_map epochs [82,83], i have 82, src has [1,83]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 83 heartbeat osd_stat(store_statfs(0x4fe0db000/0x0/0x4ffc00000, data 0x76032/0xf2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:14.810481+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 917504 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:15.810611+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.c scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.c scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 917504 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:16.810742+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:45.944729+0000 osd.2 (osd.2) 88 : cluster [DBG] 10.c scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:45.958850+0000 osd.2 (osd.2) 89 : cluster [DBG] 10.c scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 909312 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 89) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:45.944729+0000 osd.2 (osd.2) 88 : cluster [DBG] 10.c scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:45.958850+0000 osd.2 (osd.2) 89 : cluster [DBG] 10.c scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:17.810932+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 909312 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 84 handle_osd_map epochs [84,85], i have 84, src has [1,85]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:18.811059+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:47.958179+0000 osd.2 (osd.2) 90 : cluster [DBG] 10.18 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:47.972312+0000 osd.2 (osd.2) 91 : cluster [DBG] 10.18 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 892928 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 656476 data_alloc: 218103808 data_used: 94208
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 91) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:47.958179+0000 osd.2 (osd.2) 90 : cluster [DBG] 10.18 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:47.972312+0000 osd.2 (osd.2) 91 : cluster [DBG] 10.18 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 85 heartbeat osd_stat(store_statfs(0x4fe0d1000/0x0/0x4ffc00000, data 0x7b2a9/0xfb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:19.811218+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:48.959486+0000 osd.2 (osd.2) 92 : cluster [DBG] 10.1b scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:48.973582+0000 osd.2 (osd.2) 93 : cluster [DBG] 10.1b scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 909312 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 85 handle_osd_map epochs [85,86], i have 85, src has [1,86]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 93) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:48.959486+0000 osd.2 (osd.2) 92 : cluster [DBG] 10.1b scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:48.973582+0000 osd.2 (osd.2) 93 : cluster [DBG] 10.1b scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:20.811349+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=0 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000045 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=0 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000048
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000063 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000108 1 0.000159
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000048 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000224 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 892928 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 86 handle_osd_map epochs [87,87], i have 87, src has [1,87]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.268419 2 0.000141
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.268704 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.268804 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=0 lpr=86 pi=[53,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000081 1 0.000120
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:21.811480+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 884736 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.012294769s of 10.046145439s, submitted: 36
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 88 pg[9.13( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.005988 6 0.000032
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 88 pg[9.13( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 88 pg[9.13( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 88 pg[9.13( v 44'389 lc 40'116 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002557 3 0.000119
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 88 pg[9.13( v 44'389 lc 40'116 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 88 pg[9.13( v 44'389 lc 40'116 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000059 1 0.000071
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 88 pg[9.13( v 44'389 lc 40'116 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035601 1 0.000056
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:22.811574+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 1826816 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.971497 1 0.000023
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.009805 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.015831 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[53,87)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000282 1 0.000346
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000097 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000047 1 0.000195
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=11
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001813 3 0.000058
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:23.811688+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 89 heartbeat osd_stat(store_statfs(0x4fe0c4000/0x0/0x4ffc00000, data 0x81d8d/0x108000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 1818624 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 677611 data_alloc: 218103808 data_used: 94208
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 89 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 90 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996627 2 0.000053
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 90 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.998565 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 90 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 90 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 90 handle_osd_map epochs [90,90], i have 90, src has [1,90]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 90 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 90 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.000934 4 0.000132
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 90 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 90 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 90 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=89/90 n=5 ec=45/34 lis/c=89/53 les/c/f=90/54/0 sis=89) [2] r=0 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:24.811808+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 1810432 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 90 heartbeat osd_stat(store_statfs(0x4fe0c0000/0x0/0x4ffc00000, data 0x837c0/0x10b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:25.811900+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 1802240 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:26.811994+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 1802240 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:27.812147+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:57.013086+0000 osd.2 (osd.2) 94 : cluster [DBG] 10.1c scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:40:57.027202+0000 osd.2 (osd.2) 95 : cluster [DBG] 10.1c scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 90 heartbeat osd_stat(store_statfs(0x4fe0c3000/0x0/0x4ffc00000, data 0x837c0/0x10b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 1794048 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 95) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:57.013086+0000 osd.2 (osd.2) 94 : cluster [DBG] 10.1c scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:40:57.027202+0000 osd.2 (osd.2) 95 : cluster [DBG] 10.1c scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:28.812306+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 1777664 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 680004 data_alloc: 218103808 data_used: 94208
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:29.812430+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 1769472 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:30.812545+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 91 heartbeat osd_stat(store_statfs(0x4fcf1f000/0x0/0x4ffc00000, data 0x8533d/0x10e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64364544 unmapped: 1720320 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:31.812663+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 1712128 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 91 handle_osd_map epochs [91,92], i have 91, src has [1,92]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.018945694s of 10.037871361s, submitted: 24
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:32.812792+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 1679360 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:33.812921+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64413696 unmapped: 1671168 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 687308 data_alloc: 218103808 data_used: 106496
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [2] r=0 lpr=62 crt=44'389 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 60.196275 96 0.000210
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [2] r=0 lpr=62 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active 60.199326 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [2] r=0 lpr=62 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary 61.200964 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [2] r=0 lpr=62 crt=44'389 mlcod 0'0 active mbc={}] exit Started 61.200993 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [2] r=0 lpr=62 crt=44'389 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93 pruub=11.804219246s) [0] r=-1 lpr=93 pi=[62,93)/1 crt=44'389 mlcod 0'0 active pruub 172.308624268s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93 pruub=11.803787231s) [0] r=-1 lpr=93 pi=[62,93)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 172.308624268s@ mbc={}] exit Reset 0.000468 1 0.000544
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93 pruub=11.803787231s) [0] r=-1 lpr=93 pi=[62,93)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 172.308624268s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93 pruub=11.803787231s) [0] r=-1 lpr=93 pi=[62,93)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 172.308624268s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93 pruub=11.803787231s) [0] r=-1 lpr=93 pi=[62,93)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 172.308624268s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93 pruub=11.803787231s) [0] r=-1 lpr=93 pi=[62,93)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 172.308624268s@ mbc={}] exit Start 0.000087 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 93 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93 pruub=11.803787231s) [0] r=-1 lpr=93 pi=[62,93)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 172.308624268s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 93 handle_osd_map epochs [92,93], i have 93, src has [1,93]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:34.813013+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 1638400 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93) [0] r=-1 lpr=93 pi=[62,93)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.011377 3 0.000205
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93) [0] r=-1 lpr=93 pi=[62,93)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.011521 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93) [0] r=-1 lpr=93 pi=[62,93)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Reset 0.000069 1 0.000101
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000027 1 0.000031
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000038 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 94 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 94 heartbeat osd_stat(store_statfs(0x4fcf19000/0x0/0x4ffc00000, data 0x88a37/0x114000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:35.813107+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:05.056372+0000 osd.2 (osd.2) 96 : cluster [DBG] 10.1d scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:05.070499+0000 osd.2 (osd.2) 97 : cluster [DBG] 10.1d scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64438272 unmapped: 1646592 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 94 handle_osd_map epochs [94,95], i have 95, src has [1,95]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003318 4 0.000071
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003426 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 97) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:05.056372+0000 osd.2 (osd.2) 96 : cluster [DBG] 10.1d scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:05.070499+0000 osd.2 (osd.2) 97 : cluster [DBG] 10.1d scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.002309 5 0.000590
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000099 1 0.000042
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000961 1 0.000024
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.028382 2 0.000039
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:36.813225+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 1581056 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.974208 1 0.000057
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active 1.006516 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary 2.009961 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started 2.009983 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] async=[0] r=0 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96 pruub=14.996132851s) [0] async=[0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 44'389 active pruub 178.522598267s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96 pruub=14.996060371s) [0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 178.522598267s@ mbc={}] exit Reset 0.000103 1 0.000153
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96 pruub=14.996060371s) [0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 178.522598267s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96 pruub=14.996060371s) [0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 178.522598267s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96 pruub=14.996060371s) [0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 178.522598267s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96 pruub=14.996060371s) [0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 178.522598267s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96 pruub=14.996060371s) [0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 178.522598267s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 96 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 96 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:37.813341+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:07.024562+0000 osd.2 (osd.2) 98 : cluster [DBG] 10.1f scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:07.042233+0000 osd.2 (osd.2) 99 : cluster [DBG] 10.1f scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 1564672 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 96 handle_osd_map epochs [96,97], i have 96, src has [1,97]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 99) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:07.024562+0000 osd.2 (osd.2) 98 : cluster [DBG] 10.1f scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:07.042233+0000 osd.2 (osd.2) 99 : cluster [DBG] 10.1f scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 97 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.015636 7 0.000115
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 97 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 97 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 97 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000066 1 0.000101
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 97 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: not registered w/ OSD
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 97 pg[9.16( v 44'389 (0'0,44'389] lb MIN local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=-1 lpr=96 DELETING pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.030914 2 0.000163
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 97 pg[9.16( v 44'389 (0'0,44'389] lb MIN local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.031029 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 97 pg[9.16( v 44'389 (0'0,44'389] lb MIN local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=-1 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.046737 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: not registered w/ OSD
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:38.813460+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:08.043174+0000 osd.2 (osd.2) 100 : cluster [DBG] 7.1a scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:08.057281+0000 osd.2 (osd.2) 101 : cluster [DBG] 7.1a scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 1564672 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 698729 data_alloc: 218103808 data_used: 106496
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 101) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:08.043174+0000 osd.2 (osd.2) 100 : cluster [DBG] 7.1a scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:08.057281+0000 osd.2 (osd.2) 101 : cluster [DBG] 7.1a scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:39.813574+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 1564672 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:40.813668+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 97 heartbeat osd_stat(store_statfs(0x4fcf0d000/0x0/0x4ffc00000, data 0x8f4c7/0x11f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 1556480 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:41.813783+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.c deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.c deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 1556480 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 98 handle_osd_map epochs [98,99], i have 98, src has [1,99]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.029078484s of 10.060076714s, submitted: 62
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19(unlocked)] enter Initial
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=0 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000042 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=0 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000023
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000120 1 0.000040
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000027 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000193 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:42.813871+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:11.970267+0000 osd.2 (osd.2) 102 : cluster [DBG] 7.c deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:11.984399+0000 osd.2 (osd.2) 103 : cluster [DBG] 7.c deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 1548288 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 99 handle_osd_map epochs [99,100], i have 100, src has [1,100]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.004986 2 0.000104
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.005213 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.005233 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[54,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000048 1 0.000073
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 103) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:11.970267+0000 osd.2 (osd.2) 102 : cluster [DBG] 7.c deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:11.984399+0000 osd.2 (osd.2) 103 : cluster [DBG] 7.c deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:43.813983+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 1540096 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 709476 data_alloc: 218103808 data_used: 106496
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.002803 6 0.000028
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 lc 40'64 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001994 3 0.000112
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 lc 40'64 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 lc 40'64 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000075 1 0.000031
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 lc 40'64 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.049736 1 0.000026
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:44.814110+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 101 heartbeat osd_stat(store_statfs(0x4fcaf0000/0x0/0x4ffc00000, data 0x96232/0x12c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64610304 unmapped: 1474560 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.964505 1 0.000053
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.016393 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.019219 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[54,100)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000049 1 0.000075
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000022 1 0.000027
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Nov 26 12:58:05 compute-0 ceph-osd[90297]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000704 3 0.000055
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:45.814202+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:14.959563+0000 osd.2 (osd.2) 104 : cluster [DBG] 8.15 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:14.973216+0000 osd.2 (osd.2) 105 : cluster [DBG] 8.15 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 1441792 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 105) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:14.959563+0000 osd.2 (osd.2) 104 : cluster [DBG] 8.15 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:14.973216+0000 osd.2 (osd.2) 105 : cluster [DBG] 8.15 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 103 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005720 2 0.000099
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 103 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006536 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 103 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 103 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=102/103 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 103 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=102/103 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 103 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=102/103 n=5 ec=45/34 lis/c=102/54 les/c/f=103/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002081 3 0.000257
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 103 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=102/103 n=5 ec=45/34 lis/c=102/54 les/c/f=103/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 103 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=102/103 n=5 ec=45/34 lis/c=102/54 les/c/f=103/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 103 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=102/103 n=5 ec=45/34 lis/c=102/54 les/c/f=103/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 103 heartbeat osd_stat(store_statfs(0x4fcaec000/0x0/0x4ffc00000, data 0x97c7a/0x12f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:46.814312+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 1417216 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:47.814476+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 1417216 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 103 heartbeat osd_stat(store_statfs(0x4fcaeb000/0x0/0x4ffc00000, data 0x997ff/0x132000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=78) [2] r=0 lpr=78 crt=44'389 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 44.339956 79 0.000297
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=78) [2] r=0 lpr=78 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active 44.341829 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=78) [2] r=0 lpr=78 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary 45.348401 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=78) [2] r=0 lpr=78 crt=44'389 mlcod 0'0 active mbc={}] exit Started 45.348423 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=78) [2] r=0 lpr=78 crt=44'389 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104 pruub=11.659957886s) [0] r=-1 lpr=104 pi=[78,104)/1 crt=44'389 mlcod 0'0 active pruub 186.385116577s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104 pruub=11.659915924s) [0] r=-1 lpr=104 pi=[78,104)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 186.385116577s@ mbc={}] exit Reset 0.000197 1 0.000274
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104 pruub=11.659915924s) [0] r=-1 lpr=104 pi=[78,104)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 186.385116577s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104 pruub=11.659915924s) [0] r=-1 lpr=104 pi=[78,104)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 186.385116577s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104 pruub=11.659915924s) [0] r=-1 lpr=104 pi=[78,104)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 186.385116577s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104 pruub=11.659915924s) [0] r=-1 lpr=104 pi=[78,104)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 186.385116577s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 104 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104 pruub=11.659915924s) [0] r=-1 lpr=104 pi=[78,104)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 186.385116577s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 104 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:48.814568+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fcae7000/0x0/0x4ffc00000, data 0x9b37c/0x135000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64684032 unmapped: 1400832 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 735086 data_alloc: 218103808 data_used: 106496
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104) [0] r=-1 lpr=104 pi=[78,104)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.878089 3 0.000070
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104) [0] r=-1 lpr=104 pi=[78,104)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.878125 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104) [0] r=-1 lpr=104 pi=[78,104)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Reset 0.000104 1 0.000134
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000026 1 0.000032
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000019 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000026 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 105 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:49.814666+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:18.992389+0000 osd.2 (osd.2) 106 : cluster [DBG] 8.2 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:19.006264+0000 osd.2 (osd.2) 107 : cluster [DBG] 8.2 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 1351680 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 107) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:18.992389+0000 osd.2 (osd.2) 106 : cluster [DBG] 8.2 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:19.006264+0000 osd.2 (osd.2) 107 : cluster [DBG] 8.2 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 105 handle_osd_map epochs [105,106], i have 106, src has [1,106]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001786 4 0.000082
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.002070 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=78/79 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.001932 5 0.001016
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000134 1 0.000052
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000246 1 0.000077
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.049519 2 0.000039
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:50.814802+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 106 handle_osd_map epochs [107,107], i have 107, src has [1,107]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.511452 1 0.000083
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active 0.563574 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary 1.565773 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started 1.565788 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] async=[0] r=0 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107 pruub=15.438099861s) [0] async=[0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 44'389 active pruub 192.607360840s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107 pruub=15.438027382s) [0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 192.607360840s@ mbc={}] exit Reset 0.000090 1 0.000125
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107 pruub=15.438027382s) [0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 192.607360840s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107 pruub=15.438027382s) [0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 192.607360840s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107 pruub=15.438027382s) [0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 192.607360840s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107 pruub=15.438027382s) [0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 192.607360840s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107 pruub=15.438027382s) [0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 192.607360840s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 262144 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:51.814911+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 107 handle_osd_map epochs [107,108], i have 107, src has [1,108]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 108 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.003164 7 0.000106
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 108 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 108 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 108 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000063 1 0.000061
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 108 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 108 pg[9.1c( v 44'389 (0'0,44'389] lb MIN local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=-1 lpr=107 DELETING pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.052981 2 0.000192
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 108 pg[9.1c( v 44'389 (0'0,44'389] lb MIN local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.053092 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 108 pg[9.1c( v 44'389 (0'0,44'389] lb MIN local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=-1 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.056309 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.d scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.d scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 262144 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:52.815006+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:22.015129+0000 osd.2 (osd.2) 108 : cluster [DBG] 11.d scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:22.029383+0000 osd.2 (osd.2) 109 : cluster [DBG] 11.d scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 109) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:22.015129+0000 osd.2 (osd.2) 108 : cluster [DBG] 11.d scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:22.029383+0000 osd.2 (osd.2) 109 : cluster [DBG] 11.d scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.656579971s of 10.708313942s, submitted: 55
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 196608 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 108 heartbeat osd_stat(store_statfs(0x4fcada000/0x0/0x4ffc00000, data 0xa1d8c/0x140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:53.815127+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:23.044912+0000 osd.2 (osd.2) 110 : cluster [DBG] 7.2 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:23.059139+0000 osd.2 (osd.2) 111 : cluster [DBG] 7.2 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 111) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:23.044912+0000 osd.2 (osd.2) 110 : cluster [DBG] 7.2 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:23.059139+0000 osd.2 (osd.2) 111 : cluster [DBG] 7.2 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 172032 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 735624 data_alloc: 218103808 data_used: 106496
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:54.815253+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 172032 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:55.815349+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 65921024 unmapped: 163840 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 108 heartbeat osd_stat(store_statfs(0x4fcade000/0x0/0x4ffc00000, data 0xa1d8c/0x140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:56.815442+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 65921024 unmapped: 163840 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:57.815560+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 65921024 unmapped: 163840 heap: 66084864 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [2] r=0 lpr=62 crt=44'389 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 84.378048 147 0.001390
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [2] r=0 lpr=62 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active 84.381458 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [2] r=0 lpr=62 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary 85.383087 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [2] r=0 lpr=62 crt=44'389 mlcod 0'0 active mbc={}] exit Started 85.383116 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [2] r=0 lpr=62 crt=44'389 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109 pruub=11.622039795s) [0] r=-1 lpr=109 pi=[62,109)/1 crt=44'389 mlcod 0'0 active pruub 196.307983398s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109 pruub=11.621926308s) [0] r=-1 lpr=109 pi=[62,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 196.307983398s@ mbc={}] exit Reset 0.000167 1 0.000246
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109 pruub=11.621926308s) [0] r=-1 lpr=109 pi=[62,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 196.307983398s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109 pruub=11.621926308s) [0] r=-1 lpr=109 pi=[62,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 196.307983398s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109 pruub=11.621926308s) [0] r=-1 lpr=109 pi=[62,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 196.307983398s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109 pruub=11.621926308s) [0] r=-1 lpr=109 pi=[62,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 196.307983398s@ mbc={}] exit Start 0.000046 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 109 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109 pruub=11.621926308s) [0] r=-1 lpr=109 pi=[62,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 196.307983398s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:58.815647+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109) [0] r=-1 lpr=109 pi=[62,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.516129 3 0.000280
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109) [0] r=-1 lpr=109 pi=[62,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.516319 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109) [0] r=-1 lpr=109 pi=[62,109)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Reset 0.000161 1 0.000222
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Start 0.000125 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000034 1 0.000368
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000046 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000016 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 110 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 1196032 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 743444 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:59.815742+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 110 handle_osd_map epochs [110,111], i have 110, src has [1,111]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996549 4 0.000225
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=65) [2] r=0 lpr=65 crt=44'389 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 83.039293 144 0.000253
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=65) [2] r=0 lpr=65 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active 83.041055 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=65) [2] r=0 lpr=65 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary 84.043435 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=65) [2] r=0 lpr=65 crt=44'389 mlcod 0'0 active mbc={}] exit Started 84.043463 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.996895 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=65) [2] r=0 lpr=65 crt=44'389 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=62/63 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=12.962300301s) [1] r=-1 lpr=111 pi=[65,111)/1 crt=44'389 mlcod 0'0 active pruub 199.162155151s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=12.962102890s) [1] r=-1 lpr=111 pi=[65,111)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 199.162155151s@ mbc={}] exit Reset 0.000246 1 0.000326
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=12.962102890s) [1] r=-1 lpr=111 pi=[65,111)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 199.162155151s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=12.962102890s) [1] r=-1 lpr=111 pi=[65,111)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 199.162155151s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=12.962102890s) [1] r=-1 lpr=111 pi=[65,111)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 199.162155151s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=12.962102890s) [1] r=-1 lpr=111 pi=[65,111)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 199.162155151s@ mbc={}] exit Start 0.000044 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=12.962102890s) [1] r=-1 lpr=111 pi=[65,111)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 199.162155151s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 111 handle_osd_map epochs [110,111], i have 111, src has [1,111]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 111 handle_osd_map epochs [110,111], i have 111, src has [1,111]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 111 heartbeat osd_stat(store_statfs(0x4fcad2000/0x0/0x4ffc00000, data 0xa6f0c/0x149000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.127425 5 0.000459
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000053 1 0.000040
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000257 1 0.000022
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035393 2 0.000033
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 65970176 unmapped: 1163264 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:00.815790+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:30.039299+0000 osd.2 (osd.2) 112 : cluster [DBG] 11.2 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:30.053464+0000 osd.2 (osd.2) 113 : cluster [DBG] 11.2 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 111 handle_osd_map epochs [112,112], i have 112, src has [1,112]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111) [1] r=-1 lpr=111 pi=[65,111)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.969091 3 0.000176
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111) [1] r=-1 lpr=111 pi=[65,111)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.969182 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111) [1] r=-1 lpr=111 pi=[65,111)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Reset 0.000051 1 0.000070
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000021 1 0.000027
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000018 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.807058 1 0.000049
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active 0.970551 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary 1.967494 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started 1.967755 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112 pruub=15.156332970s) [0] async=[0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 44'389 active pruub 202.327163696s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112 pruub=15.156107903s) [0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 202.327163696s@ mbc={}] exit Reset 0.000660 1 0.000865
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112 pruub=15.156107903s) [0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 202.327163696s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112 pruub=15.156107903s) [0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 202.327163696s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112 pruub=15.156107903s) [0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 202.327163696s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112 pruub=15.156107903s) [0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 202.327163696s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112 pruub=15.156107903s) [0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 202.327163696s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 112 handle_osd_map epochs [112,112], i have 112, src has [1,112]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 113) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:30.039299+0000 osd.2 (osd.2) 112 : cluster [DBG] 11.2 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:30.053464+0000 osd.2 (osd.2) 113 : cluster [DBG] 11.2 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 1155072 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 112 heartbeat osd_stat(store_statfs(0x4fcace000/0x0/0x4ffc00000, data 0xa898e/0x14c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:01.815901+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:31.027363+0000 osd.2 (osd.2) 114 : cluster [DBG] 7.e scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:31.041503+0000 osd.2 (osd.2) 115 : cluster [DBG] 7.e scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 112 handle_osd_map epochs [112,113], i have 112, src has [1,113]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 112 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001375 4 0.000063
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.001469 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=65/66 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.003923 7 0.000216
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000045 1 0.000047
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: not registered w/ OSD
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 115) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:31.027363+0000 osd.2 (osd.2) 114 : cluster [DBG] 7.e scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:31.041503+0000 osd.2 (osd.2) 115 : cluster [DBG] 7.e scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1e( v 44'389 (0'0,44'389] lb MIN local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=-1 lpr=112 DELETING pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.038346 2 0.000156
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1e( v 44'389 (0'0,44'389] lb MIN local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.038435 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1e( v 44'389 (0'0,44'389] lb MIN local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=-1 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.042395 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: not registered w/ OSD
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.b deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.b deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 1146880 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 113 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.519079 5 0.000168
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000059 1 0.000032
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000615 1 0.000015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035629 2 0.000030
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:02.816032+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:32.049207+0000 osd.2 (osd.2) 116 : cluster [DBG] 11.b deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:32.063357+0000 osd.2 (osd.2) 117 : cluster [DBG] 11.b deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 117) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:32.049207+0000 osd.2 (osd.2) 116 : cluster [DBG] 11.b deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:32.063357+0000 osd.2 (osd.2) 117 : cluster [DBG] 11.b deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.492867 1 0.000057
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active 1.048423 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary 2.049922 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started 2.049988 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.470242500s) [1] async=[1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 44'389 active pruub 204.689910889s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.469997406s) [1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 204.689910889s@ mbc={}] exit Reset 0.000564 1 0.000695
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.469997406s) [1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 204.689910889s@ mbc={}] enter Started
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.469997406s) [1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 204.689910889s@ mbc={}] enter Start
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.469997406s) [1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 204.689910889s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.469997406s) [1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 204.689910889s@ mbc={}] exit Start 0.000156 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.469997406s) [1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 204.689910889s@ mbc={}] enter Started/Stray
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 114 handle_osd_map epochs [114,114], i have 114, src has [1,114]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 1130496 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:03.816168+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 114 handle_osd_map epochs [114,115], i have 114, src has [1,115]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.922721863s of 10.958656311s, submitted: 45
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 115 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.006232 7 0.000324
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 115 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 115 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 115 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000047 1 0.000039
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 115 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 115 pg[9.1f( v 44'389 (0'0,44'389] lb MIN local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=-1 lpr=114 DELETING pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.038257 2 0.000131
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 115 pg[9.1f( v 44'389 (0'0,44'389] lb MIN local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.038339 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 pg_epoch: 115 pg[9.1f( v 44'389 (0'0,44'389] lb MIN local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=-1 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.044784 0 0.000000
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcac7000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 1114112 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 744418 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:04.816283+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 1114112 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:05.816379+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 1097728 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:06.816482+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcac7000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 1064960 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:07.816604+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 1064960 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:08.816694+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 1056768 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 744418 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:09.816788+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 1056768 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:10.816887+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.d scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.d scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66084864 unmapped: 1048576 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcac7000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:11.816976+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:41.155125+0000 osd.2 (osd.2) 118 : cluster [DBG] 8.d scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:41.169261+0000 osd.2 (osd.2) 119 : cluster [DBG] 8.d scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 119) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:41.155125+0000 osd.2 (osd.2) 118 : cluster [DBG] 8.d scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:41.169261+0000 osd.2 (osd.2) 119 : cluster [DBG] 8.d scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66093056 unmapped: 1040384 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:12.817088+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:42.120212+0000 osd.2 (osd.2) 120 : cluster [DBG] 7.1 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:42.134356+0000 osd.2 (osd.2) 121 : cluster [DBG] 7.1 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 121) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:42.120212+0000 osd.2 (osd.2) 120 : cluster [DBG] 7.1 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:42.134356+0000 osd.2 (osd.2) 121 : cluster [DBG] 7.1 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 1032192 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:13.817216+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 1007616 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 744488 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:14.817369+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 1007616 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:15.817466+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.8 deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.148889542s of 12.159530640s, submitted: 9
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.8 deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 999424 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:16.817560+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:46.163083+0000 osd.2 (osd.2) 122 : cluster [DBG] 11.8 deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:46.177210+0000 osd.2 (osd.2) 123 : cluster [DBG] 11.8 deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 123) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:46.163083+0000 osd.2 (osd.2) 122 : cluster [DBG] 11.8 deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:46.177210+0000 osd.2 (osd.2) 123 : cluster [DBG] 11.8 deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 999424 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:17.817694+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:47.193215+0000 osd.2 (osd.2) 124 : cluster [DBG] 8.4 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:47.207387+0000 osd.2 (osd.2) 125 : cluster [DBG] 8.4 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 125) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:47.193215+0000 osd.2 (osd.2) 124 : cluster [DBG] 8.4 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:47.207387+0000 osd.2 (osd.2) 125 : cluster [DBG] 8.4 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 991232 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:18.817831+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:48.144393+0000 osd.2 (osd.2) 126 : cluster [DBG] 7.a scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:48.158514+0000 osd.2 (osd.2) 127 : cluster [DBG] 7.a scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 127) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:48.144393+0000 osd.2 (osd.2) 126 : cluster [DBG] 7.a scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:48.158514+0000 osd.2 (osd.2) 127 : cluster [DBG] 7.a scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 991232 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 747930 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:19.817955+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 983040 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:20.818063+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 950272 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:21.818196+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:51.259967+0000 osd.2 (osd.2) 128 : cluster [DBG] 7.8 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:51.274087+0000 osd.2 (osd.2) 129 : cluster [DBG] 7.8 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 129) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:51.259967+0000 osd.2 (osd.2) 128 : cluster [DBG] 7.8 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:51.274087+0000 osd.2 (osd.2) 129 : cluster [DBG] 7.8 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 950272 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:22.818369+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 950272 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:23.818502+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 925696 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 749077 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:24.818653+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:54.297511+0000 osd.2 (osd.2) 130 : cluster [DBG] 11.18 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:54.311632+0000 osd.2 (osd.2) 131 : cluster [DBG] 11.18 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 131) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:54.297511+0000 osd.2 (osd.2) 130 : cluster [DBG] 11.18 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:54.311632+0000 osd.2 (osd.2) 131 : cluster [DBG] 11.18 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 925696 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:25.818839+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 917504 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.113793373s of 10.123807907s, submitted: 10
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:26.818962+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:56.286937+0000 osd.2 (osd.2) 132 : cluster [DBG] 8.1b scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:56.301057+0000 osd.2 (osd.2) 133 : cluster [DBG] 8.1b scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 133) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:56.286937+0000 osd.2 (osd.2) 132 : cluster [DBG] 8.1b scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:56.301057+0000 osd.2 (osd.2) 133 : cluster [DBG] 8.1b scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 917504 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:27.819114+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 917504 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:28.819235+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 917504 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 752523 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:29.819399+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:59.185233+0000 osd.2 (osd.2) 134 : cluster [DBG] 11.1b scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:41:59.199434+0000 osd.2 (osd.2) 135 : cluster [DBG] 11.1b scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 135) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:59.185233+0000 osd.2 (osd.2) 134 : cluster [DBG] 11.1b scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:41:59.199434+0000 osd.2 (osd.2) 135 : cluster [DBG] 11.1b scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 917504 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:30.819594+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 909312 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:31.819742+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:01.219525+0000 osd.2 (osd.2) 136 : cluster [DBG] 11.1c scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:01.233623+0000 osd.2 (osd.2) 137 : cluster [DBG] 11.1c scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 137) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:01.219525+0000 osd.2 (osd.2) 136 : cluster [DBG] 11.1c scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:01.233623+0000 osd.2 (osd.2) 137 : cluster [DBG] 11.1c scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 901120 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:32.819979+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66240512 unmapped: 892928 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:33.820141+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 868352 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 753672 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:34.820289+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 860160 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:35.820384+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:05.214021+0000 osd.2 (osd.2) 138 : cluster [DBG] 11.1e scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:05.228221+0000 osd.2 (osd.2) 139 : cluster [DBG] 11.1e scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 139) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:05.214021+0000 osd.2 (osd.2) 138 : cluster [DBG] 11.1e scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:05.228221+0000 osd.2 (osd.2) 139 : cluster [DBG] 11.1e scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 860160 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:36.820526+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 860160 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:37.820660+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.883275986s of 11.894712448s, submitted: 8
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 860160 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:38.820810+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:08.181655+0000 osd.2 (osd.2) 140 : cluster [DBG] 11.11 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:08.195787+0000 osd.2 (osd.2) 141 : cluster [DBG] 11.11 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 141) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:08.181655+0000 osd.2 (osd.2) 140 : cluster [DBG] 11.11 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:08.195787+0000 osd.2 (osd.2) 141 : cluster [DBG] 11.11 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 843776 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 755970 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:39.821022+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.1c deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.1c deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 843776 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:40.821144+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:10.168611+0000 osd.2 (osd.2) 142 : cluster [DBG] 8.1c deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:10.182688+0000 osd.2 (osd.2) 143 : cluster [DBG] 8.1c deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 143) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:10.168611+0000 osd.2 (osd.2) 142 : cluster [DBG] 8.1c deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:10.182688+0000 osd.2 (osd.2) 143 : cluster [DBG] 8.1c deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 835584 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:41.821288+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 835584 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:42.821429+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 827392 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:43.821542+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:13.176398+0000 osd.2 (osd.2) 144 : cluster [DBG] 8.12 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:13.190413+0000 osd.2 (osd.2) 145 : cluster [DBG] 8.12 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 145) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:13.176398+0000 osd.2 (osd.2) 144 : cluster [DBG] 8.12 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:13.190413+0000 osd.2 (osd.2) 145 : cluster [DBG] 8.12 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 827392 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 759415 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:44.821733+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:14.138340+0000 osd.2 (osd.2) 146 : cluster [DBG] 11.12 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:14.152387+0000 osd.2 (osd.2) 147 : cluster [DBG] 11.12 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 147) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:14.138340+0000 osd.2 (osd.2) 146 : cluster [DBG] 11.12 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:14.152387+0000 osd.2 (osd.2) 147 : cluster [DBG] 11.12 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 827392 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:45.822134+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 819200 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:46.822252+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 819200 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:47.822420+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.992584229s of 10.003332138s, submitted: 9
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 811008 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:48.822536+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:18.174401+0000 osd.2 (osd.2) 148 : cluster [DBG] 11.3 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:18.188550+0000 osd.2 (osd.2) 149 : cluster [DBG] 11.3 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 149) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:18.174401+0000 osd.2 (osd.2) 148 : cluster [DBG] 11.3 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:18.188550+0000 osd.2 (osd.2) 149 : cluster [DBG] 11.3 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 802816 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 761711 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:49.822680+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:19.221140+0000 osd.2 (osd.2) 150 : cluster [DBG] 8.11 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:19.235287+0000 osd.2 (osd.2) 151 : cluster [DBG] 8.11 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 151) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:19.221140+0000 osd.2 (osd.2) 150 : cluster [DBG] 8.11 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:19.235287+0000 osd.2 (osd.2) 151 : cluster [DBG] 8.11 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.5 deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.5 deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 802816 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:50.822789+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:20.244497+0000 osd.2 (osd.2) 152 : cluster [DBG] 7.5 deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:20.258513+0000 osd.2 (osd.2) 153 : cluster [DBG] 7.5 deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 153) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:20.244497+0000 osd.2 (osd.2) 152 : cluster [DBG] 7.5 deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:20.258513+0000 osd.2 (osd.2) 153 : cluster [DBG] 7.5 deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 802816 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:51.822904+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 794624 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:52.822997+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:22.271871+0000 osd.2 (osd.2) 154 : cluster [DBG] 11.9 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:22.289560+0000 osd.2 (osd.2) 155 : cluster [DBG] 11.9 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 155) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:22.271871+0000 osd.2 (osd.2) 154 : cluster [DBG] 11.9 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:22.289560+0000 osd.2 (osd.2) 155 : cluster [DBG] 11.9 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 786432 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:53.823130+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 761856 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 764006 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:54.823237+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 761856 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:55.823336+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 753664 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:56.823439+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 753664 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:57.823563+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.025296211s of 10.035103798s, submitted: 7
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 753664 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:58.823665+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:28.220118+0000 osd.2 (osd.2) 156 : cluster [DBG] 11.1f scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:28.234262+0000 osd.2 (osd.2) 157 : cluster [DBG] 11.1f scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 157) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:28.220118+0000 osd.2 (osd.2) 156 : cluster [DBG] 11.1f scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:28.234262+0000 osd.2 (osd.2) 157 : cluster [DBG] 11.1f scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.15 deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.15 deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 720896 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766304 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:59.823801+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:29.189701+0000 osd.2 (osd.2) 158 : cluster [DBG] 11.15 deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:29.203846+0000 osd.2 (osd.2) 159 : cluster [DBG] 11.15 deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 159) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:29.189701+0000 osd.2 (osd.2) 158 : cluster [DBG] 11.15 deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:29.203846+0000 osd.2 (osd.2) 159 : cluster [DBG] 11.15 deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 720896 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:00.823968+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 704512 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:01.824095+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:31.222514+0000 osd.2 (osd.2) 160 : cluster [DBG] 11.1a scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:31.236408+0000 osd.2 (osd.2) 161 : cluster [DBG] 11.1a scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 161) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:31.222514+0000 osd.2 (osd.2) 160 : cluster [DBG] 11.1a scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:31.236408+0000 osd.2 (osd.2) 161 : cluster [DBG] 11.1a scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 704512 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:02.824204+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 704512 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:03.824322+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 688128 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768601 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:04.824452+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:34.241828+0000 osd.2 (osd.2) 162 : cluster [DBG] 7.11 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:34.255946+0000 osd.2 (osd.2) 163 : cluster [DBG] 7.11 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 163) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:34.241828+0000 osd.2 (osd.2) 162 : cluster [DBG] 7.11 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:34.255946+0000 osd.2 (osd.2) 163 : cluster [DBG] 7.11 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 688128 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:05.824630+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 688128 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:06.824785+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 679936 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:07.824929+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:37.196674+0000 osd.2 (osd.2) 164 : cluster [DBG] 7.15 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:37.210858+0000 osd.2 (osd.2) 165 : cluster [DBG] 7.15 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 165) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:37.196674+0000 osd.2 (osd.2) 164 : cluster [DBG] 7.15 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:37.210858+0000 osd.2 (osd.2) 165 : cluster [DBG] 7.15 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 679936 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:08.825064+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 679936 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 769749 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:09.825183+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 671744 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:10.825301+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.934693336s of 12.946996689s, submitted: 10
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 671744 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:11.825399+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:41.167082+0000 osd.2 (osd.2) 166 : cluster [DBG] 7.1c scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:41.181249+0000 osd.2 (osd.2) 167 : cluster [DBG] 7.1c scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.6 deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.6 deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 167) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:41.167082+0000 osd.2 (osd.2) 166 : cluster [DBG] 7.1c scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:41.181249+0000 osd.2 (osd.2) 167 : cluster [DBG] 7.1c scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 663552 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:12.825578+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:42.163738+0000 osd.2 (osd.2) 168 : cluster [DBG] 9.6 deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:42.199037+0000 osd.2 (osd.2) 169 : cluster [DBG] 9.6 deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 169) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:42.163738+0000 osd.2 (osd.2) 168 : cluster [DBG] 9.6 deep-scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:42.199037+0000 osd.2 (osd.2) 169 : cluster [DBG] 9.6 deep-scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 663552 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:13.825717+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 638976 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 772044 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:14.825801+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 638976 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:15.825896+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 630784 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:16.825991+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 614400 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:17.826100+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.e scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.e scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 614400 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:18.826193+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:48.200118+0000 osd.2 (osd.2) 170 : cluster [DBG] 9.e scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:48.231924+0000 osd.2 (osd.2) 171 : cluster [DBG] 9.e scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 171) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:48.200118+0000 osd.2 (osd.2) 170 : cluster [DBG] 9.e scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:48.231924+0000 osd.2 (osd.2) 171 : cluster [DBG] 9.e scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 606208 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 774339 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:19.826330+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:49.212445+0000 osd.2 (osd.2) 172 : cluster [DBG] 9.17 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:49.237184+0000 osd.2 (osd.2) 173 : cluster [DBG] 9.17 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 173) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:49.212445+0000 osd.2 (osd.2) 172 : cluster [DBG] 9.17 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:49.237184+0000 osd.2 (osd.2) 173 : cluster [DBG] 9.17 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 606208 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:20.826481+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 606208 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:21.826591+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.f scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.027474403s of 11.042146683s, submitted: 8
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.f scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 598016 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:22.826687+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:52.209227+0000 osd.2 (osd.2) 174 : cluster [DBG] 9.f scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:52.248143+0000 osd.2 (osd.2) 175 : cluster [DBG] 9.f scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 175) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:52.209227+0000 osd.2 (osd.2) 174 : cluster [DBG] 9.f scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:52.248143+0000 osd.2 (osd.2) 175 : cluster [DBG] 9.f scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 598016 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:23.826800+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 581632 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 776633 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:24.826901+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:54.193857+0000 osd.2 (osd.2) 176 : cluster [DBG] 9.7 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:54.225615+0000 osd.2 (osd.2) 177 : cluster [DBG] 9.7 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 177) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:54.193857+0000 osd.2 (osd.2) 176 : cluster [DBG] 9.7 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:54.225615+0000 osd.2 (osd.2) 177 : cluster [DBG] 9.7 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 581632 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:25.827035+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:55.225003+0000 osd.2 (osd.2) 178 : cluster [DBG] 6.8 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:55.239140+0000 osd.2 (osd.2) 179 : cluster [DBG] 6.8 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 179) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:55.225003+0000 osd.2 (osd.2) 178 : cluster [DBG] 6.8 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:55.239140+0000 osd.2 (osd.2) 179 : cluster [DBG] 6.8 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 573440 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:26.827183+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 573440 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:27.827358+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 565248 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:28.827503+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 565248 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 778927 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:29.827626+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:59.231478+0000 osd.2 (osd.2) 180 : cluster [DBG] 9.8 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:42:59.269844+0000 osd.2 (osd.2) 181 : cluster [DBG] 9.8 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 181) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:59.231478+0000 osd.2 (osd.2) 180 : cluster [DBG] 9.8 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:42:59.269844+0000 osd.2 (osd.2) 181 : cluster [DBG] 9.8 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 557056 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:30.827821+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:43:00.213024+0000 osd.2 (osd.2) 182 : cluster [DBG] 9.18 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:43:00.241254+0000 osd.2 (osd.2) 183 : cluster [DBG] 9.18 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 183) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:43:00.213024+0000 osd.2 (osd.2) 182 : cluster [DBG] 9.18 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:43:00.241254+0000 osd.2 (osd.2) 183 : cluster [DBG] 9.18 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 548864 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:31.827968+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.c scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.c scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 548864 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:32.828050+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:43:02.160281+0000 osd.2 (osd.2) 184 : cluster [DBG] 9.c scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:43:02.192069+0000 osd.2 (osd.2) 185 : cluster [DBG] 9.c scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.975472450s of 10.992065430s, submitted: 12
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 185) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:43:02.160281+0000 osd.2 (osd.2) 184 : cluster [DBG] 9.c scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:43:02.192069+0000 osd.2 (osd.2) 185 : cluster [DBG] 9.c scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 540672 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:33.828217+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:43:03.201317+0000 osd.2 (osd.2) 186 : cluster [DBG] 6.f scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:43:03.225996+0000 osd.2 (osd.2) 187 : cluster [DBG] 6.f scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 187) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:43:03.201317+0000 osd.2 (osd.2) 186 : cluster [DBG] 6.f scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:43:03.225996+0000 osd.2 (osd.2) 187 : cluster [DBG] 6.f scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 524288 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 782369 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:34.828385+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 524288 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:35.828513+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 516096 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:36.828631+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 516096 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:37.828807+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 516096 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:38.828930+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:43:08.292566+0000 osd.2 (osd.2) 188 : cluster [DBG] 9.13 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:43:08.324506+0000 osd.2 (osd.2) 189 : cluster [DBG] 9.13 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 189) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:43:08.292566+0000 osd.2 (osd.2) 188 : cluster [DBG] 9.13 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:43:08.324506+0000 osd.2 (osd.2) 189 : cluster [DBG] 9.13 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 499712 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783517 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:39.829160+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 499712 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:40.829433+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:43:10.251300+0000 osd.2 (osd.2) 190 : cluster [DBG] 9.19 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  will send 2025-11-26T12:43:10.290153+0000 osd.2 (osd.2) 191 : cluster [DBG] 9.19 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client handle_log_ack log(last 191) v1
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:43:10.251300+0000 osd.2 (osd.2) 190 : cluster [DBG] 9.19 scrub starts
Nov 26 12:58:05 compute-0 ceph-osd[90297]: log_client  logged 2025-11-26T12:43:10.290153+0000 osd.2 (osd.2) 191 : cluster [DBG] 9.19 scrub ok
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 491520 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:41.829694+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 491520 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:42.829852+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 483328 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:43.830003+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 491520 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:44.830131+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 491520 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:45.830279+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 483328 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:46.830451+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 483328 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:47.830623+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 475136 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:48.830768+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 475136 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:49.831568+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 475136 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:50.831730+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 466944 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:51.831888+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 466944 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:52.832036+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 466944 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:53.832176+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 442368 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:54.832341+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 442368 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:55.832512+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 442368 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:56.832641+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 434176 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:57.832823+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 434176 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:58.832954+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 434176 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:59.833095+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 417792 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:00.833218+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 417792 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:01.833370+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 409600 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:02.833570+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 409600 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:03.833702+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 409600 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:04.833828+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 401408 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:05.833961+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 401408 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:06.834120+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 401408 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:07.834295+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 393216 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:08.834404+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 393216 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:09.834516+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 385024 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:10.834613+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 385024 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:11.834714+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 385024 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:12.834817+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 376832 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:13.834950+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 368640 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:14.835077+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 368640 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:15.835215+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 360448 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:16.835307+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 360448 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:17.835455+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 352256 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:18.835586+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 352256 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:19.835683+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 352256 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:20.835793+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 344064 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:21.835900+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 344064 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:22.835991+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 344064 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:23.836083+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 335872 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:24.836246+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 335872 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:25.836369+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 327680 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:26.836462+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 327680 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:27.836582+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 327680 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:28.836677+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 319488 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:29.836793+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 319488 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:30.836895+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 311296 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:31.837601+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 311296 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:32.837726+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 311296 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:33.837806+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 303104 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:34.837900+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 303104 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:35.838001+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 294912 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:36.838107+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 294912 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:37.838248+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 294912 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:38.838419+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 286720 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:39.838573+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 286720 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:40.838720+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 286720 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:41.838867+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 278528 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:42.839001+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 278528 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:43.839128+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 278528 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:44.839244+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 278528 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:45.839359+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 278528 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:46.839490+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 278528 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:47.839647+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 270336 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:48.839795+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 270336 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:49.839922+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 262144 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:50.840049+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 262144 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:51.840235+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 262144 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:52.840344+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 253952 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:53.840468+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 253952 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:54.840611+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 253952 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:55.840745+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 245760 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:56.841065+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 245760 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:57.841233+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 237568 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:58.841371+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 237568 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:59.841473+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 237568 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:00.841579+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 229376 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:01.841739+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 229376 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:02.841932+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 221184 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:03.842092+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 221184 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:04.842262+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 221184 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:05.842426+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 204800 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:06.842574+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 204800 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:07.843388+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 196608 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:08.843547+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 204800 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:09.843689+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 204800 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:10.843815+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 196608 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:11.843952+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 196608 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:12.844074+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 196608 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:13.844208+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 188416 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:14.844350+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 188416 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:15.844461+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 188416 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:16.844574+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 180224 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:17.844703+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 180224 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:18.844816+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 172032 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:19.844918+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 172032 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:20.845044+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 172032 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:21.845151+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 163840 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:22.845269+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 163840 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:23.845403+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 155648 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:24.845542+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 155648 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:25.845698+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 155648 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:26.845829+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 147456 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:27.846042+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 139264 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:28.846202+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 131072 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:29.846360+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 131072 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:30.846525+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 131072 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:31.846616+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 122880 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:32.846745+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 122880 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:33.846897+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 122880 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:34.847049+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 114688 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:35.847990+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 114688 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:36.848110+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 106496 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:37.848273+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 98304 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:38.848402+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 98304 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:39.848513+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 90112 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:40.848640+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 90112 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:41.848808+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 90112 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:42.848944+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 73728 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:43.849070+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 65536 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:44.849236+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 57344 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:45.849407+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 57344 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:46.849586+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 57344 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:47.849875+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 49152 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:48.850014+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 49152 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:49.850147+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 49152 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:50.850248+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 40960 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:51.850365+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 40960 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:52.850499+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 32768 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:53.850601+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 32768 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:54.850705+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 24576 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:55.850789+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 24576 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:56.850880+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 24576 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:57.850995+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 16384 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:58.851085+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 24576 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:59.851170+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 24576 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:00.851270+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 16384 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:01.851378+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 16384 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:02.851485+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 16384 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:03.851583+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 8192 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:04.851692+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 8192 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:05.851806+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 0 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:06.851920+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 0 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:07.852047+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 0 heap: 67133440 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:08.852158+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 1040384 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:09.852256+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 1040384 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:10.852373+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 1032192 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:11.852472+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 1032192 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:12.852580+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 1032192 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:13.852735+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 1024000 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:14.852870+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 1024000 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:15.852974+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 1024000 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:16.853092+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 1015808 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:17.853206+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 1032192 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:18.853296+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 1032192 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:19.853402+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 1024000 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:20.853504+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 1024000 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:21.853600+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 1015808 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:22.853698+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 1015808 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:23.853807+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 1015808 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:24.853920+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 1007616 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:25.854021+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 1007616 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:26.854187+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 999424 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:27.854313+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 999424 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:28.854423+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 999424 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:29.854519+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 991232 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:30.854614+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 991232 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:31.854719+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 983040 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:32.854826+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 983040 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:33.854926+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 983040 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:34.855038+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 974848 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:35.855164+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 974848 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:36.855283+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 974848 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:37.855396+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 966656 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:38.855485+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 966656 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:39.855581+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 958464 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:40.855671+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 958464 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:41.855772+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 950272 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:42.855866+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 950272 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:43.856000+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 950272 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:44.856103+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 942080 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:45.856228+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 942080 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:46.856322+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 942080 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:47.856431+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 933888 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:48.856563+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 933888 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:49.856652+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 925696 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:50.856799+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 925696 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:51.856933+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 917504 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:52.857088+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 917504 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:53.857198+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 917504 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:54.857302+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 909312 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:55.857405+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:56.857509+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 909312 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:57.857634+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 901120 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:58.857789+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 901120 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:59.857888+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 901120 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:00.857984+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 892928 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:01.858103+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 892928 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:02.858202+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 892928 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:03.858294+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 884736 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:04.858385+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 884736 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:05.858506+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 884736 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:06.858598+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 868352 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:07.858700+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 860160 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:08.858782+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 860160 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:09.858869+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 860160 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:10.858969+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 851968 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:11.859107+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 851968 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:12.859231+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 851968 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:13.859329+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 843776 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:14.859424+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 843776 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:15.859529+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 835584 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:16.859657+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 835584 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:17.859805+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 835584 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:18.859941+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 827392 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:19.860043+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 827392 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:20.860166+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 827392 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:21.860264+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 819200 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:22.860410+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 819200 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:23.860564+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 811008 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:24.860695+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 811008 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:25.860828+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 811008 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:26.860949+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 802816 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:27.861108+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 802816 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:28.861228+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 802816 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:29.861337+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 794624 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:30.861437+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 794624 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:31.861538+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 786432 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:32.861644+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 786432 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:33.861744+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 778240 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:34.861870+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 770048 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:35.861982+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 770048 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:36.862084+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 770048 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:37.862205+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 761856 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:38.862309+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 745472 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:39.862416+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 737280 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:40.862525+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 737280 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:41.862626+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 737280 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:42.862733+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 729088 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:43.862803+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 729088 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:44.862896+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 729088 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:45.862996+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 720896 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:46.863108+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 720896 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:47.863244+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 712704 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:48.863352+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 720896 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:49.863477+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 720896 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:50.863598+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 712704 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:51.863712+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 712704 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:52.863836+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 712704 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:53.863952+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 704512 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:54.864059+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 704512 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:55.864179+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 696320 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:56.864323+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 696320 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:57.864481+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 696320 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:58.864602+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 679936 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:59.864812+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 679936 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:00.864950+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 671744 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:01.865543+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 671744 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:02.865724+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 671744 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:03.865884+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 671744 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:04.866018+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 663552 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:05.866157+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 663552 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:06.866331+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 655360 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:07.866546+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 655360 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:08.866712+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 647168 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:09.866893+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 647168 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:10.867025+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 647168 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:11.867171+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 638976 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:12.867329+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 638976 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:13.867468+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 630784 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:14.868027+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 630784 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:15.868139+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 630784 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:16.868266+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 622592 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:17.868424+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 622592 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:18.868549+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 614400 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:19.868686+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 614400 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:20.868824+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 614400 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:21.868952+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 606208 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:22.869076+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 606208 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:23.869192+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 589824 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:24.869315+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 589824 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:25.869453+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 589824 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:26.869586+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 581632 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:27.869807+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 581632 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:28.869930+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 565248 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:29.870038+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 565248 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:30.870153+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 565248 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:31.870269+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 557056 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:32.870435+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 557056 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:33.870577+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 565248 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:34.870726+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 557056 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:35.870856+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 557056 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:36.870972+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 557056 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:37.871116+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 548864 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:38.871231+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 548864 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:39.871338+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 540672 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:40.871448+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 540672 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:41.871548+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 540672 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:42.871652+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 532480 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:43.871748+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 524288 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:44.871904+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 524288 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:45.872033+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 516096 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:46.872144+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 516096 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:47.872339+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 516096 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:48.872445+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 516096 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:49.872567+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 516096 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:50.872682+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 516096 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:51.872925+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 507904 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:52.873047+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 507904 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 5527 writes, 23K keys, 5527 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5527 writes, 849 syncs, 6.51 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5527 writes, 23K keys, 5527 commit groups, 1.0 writes per commit group, ingest: 18.26 MB, 0.03 MB/s
                                           Interval WAL: 5527 writes, 849 syncs, 6.51 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640ef9b11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:53.873173+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 434176 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:54.873306+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 434176 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:55.873423+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 434176 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:56.873534+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 425984 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:57.873677+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 425984 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:58.873809+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 425984 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:59.873953+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 417792 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:00.874105+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 417792 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:01.874267+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 409600 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:02.874414+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 409600 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:03.874556+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 409600 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:04.874691+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 401408 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:05.874823+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 385024 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:06.874953+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 385024 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:07.875121+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 376832 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:08.875264+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 376832 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:09.875375+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 368640 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:10.875477+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 352256 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:11.875592+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 352256 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:12.875753+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 344064 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:13.875874+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 344064 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:14.875990+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 344064 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:15.876103+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 335872 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:16.876206+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 335872 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:17.876332+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 327680 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:18.876444+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 335872 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:19.876550+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 335872 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:20.876659+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 327680 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:21.876767+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 327680 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:22.876865+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 319488 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:23.876992+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 319488 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:24.877114+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 311296 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:25.877209+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 311296 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:26.877314+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 311296 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:27.877452+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 303104 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:28.877563+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 303104 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:29.877684+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 303104 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:30.877812+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 294912 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:31.877918+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 294912 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:32.878032+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 294912 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:33.878140+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 286720 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:34.878243+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 286720 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:35.878343+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 278528 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:36.878464+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 278528 heap: 68182016 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 364.351928711s of 364.360015869s, submitted: 6
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:37.878587+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 2007040 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:38.878694+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 2007040 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:39.878794+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 2007040 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:40.878956+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 2007040 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:41.879083+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 2007040 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:42.879180+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 2007040 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:43.879313+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 2007040 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:44.879488+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 2007040 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:45.879589+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 1998848 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:46.879735+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 1998848 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:47.879914+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69337088 unmapped: 1990656 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:48.880008+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69337088 unmapped: 1990656 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:49.880119+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69345280 unmapped: 1982464 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:50.880265+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69345280 unmapped: 1982464 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:51.880430+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69345280 unmapped: 1982464 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:52.880584+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 1974272 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:53.880732+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 1974272 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:54.880881+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 1966080 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:55.881006+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 1966080 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:56.881129+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 1966080 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:57.881297+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 1957888 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:58.881413+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 1949696 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:59.881532+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69386240 unmapped: 1941504 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:00.881644+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69386240 unmapped: 1941504 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:01.881744+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69394432 unmapped: 1933312 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:02.881859+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69394432 unmapped: 1933312 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:03.881973+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69402624 unmapped: 1925120 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:04.882722+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69410816 unmapped: 1916928 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:05.882809+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69419008 unmapped: 1908736 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:06.882908+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69427200 unmapped: 1900544 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:07.883019+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69427200 unmapped: 1900544 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:08.883114+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69427200 unmapped: 1900544 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:09.883220+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69435392 unmapped: 1892352 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:10.883332+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69435392 unmapped: 1892352 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:11.883448+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69435392 unmapped: 1892352 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:12.883565+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69443584 unmapped: 1884160 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:13.883669+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69451776 unmapped: 1875968 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:14.883770+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69451776 unmapped: 1875968 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:15.883878+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69451776 unmapped: 1875968 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:16.883988+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69459968 unmapped: 1867776 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:17.884112+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69459968 unmapped: 1867776 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:18.884222+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69468160 unmapped: 1859584 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:19.884369+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69468160 unmapped: 1859584 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:20.884476+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69476352 unmapped: 1851392 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:21.884605+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69476352 unmapped: 1851392 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:22.884718+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69476352 unmapped: 1851392 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:23.884790+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69484544 unmapped: 1843200 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:24.884899+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69484544 unmapped: 1843200 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:25.884996+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69484544 unmapped: 1843200 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:26.885086+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69492736 unmapped: 1835008 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:27.885191+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69492736 unmapped: 1835008 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:28.885347+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69500928 unmapped: 1826816 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:29.885544+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69500928 unmapped: 1826816 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:30.885697+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69500928 unmapped: 1826816 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:31.885844+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 1818624 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:32.885998+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 1818624 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:33.886141+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 1810432 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:34.886311+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 1810432 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:35.886470+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 1810432 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:36.886602+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 1802240 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:37.886740+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 1802240 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:38.886874+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 1794048 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:39.886976+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 1794048 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:40.887155+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69541888 unmapped: 1785856 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:41.887255+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69541888 unmapped: 1785856 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:42.887358+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69541888 unmapped: 1785856 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:43.887456+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 1777664 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:44.887556+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 1777664 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:45.887651+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69558272 unmapped: 1769472 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:46.887745+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69558272 unmapped: 1769472 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:47.887892+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69558272 unmapped: 1769472 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:48.887995+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69566464 unmapped: 1761280 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:49.888101+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69566464 unmapped: 1761280 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:50.888200+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69566464 unmapped: 1761280 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:51.888301+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69574656 unmapped: 1753088 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:52.888415+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69574656 unmapped: 1753088 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:53.888577+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69574656 unmapped: 1753088 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:54.888685+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69574656 unmapped: 1753088 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:55.888815+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69574656 unmapped: 1753088 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:56.888934+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69574656 unmapped: 1753088 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:57.889067+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69574656 unmapped: 1753088 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:58.889172+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69574656 unmapped: 1753088 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:59.889269+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69574656 unmapped: 1753088 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:00.889555+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1744896 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:01.889659+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1744896 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:02.889783+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1744896 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:03.889885+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1744896 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:04.889985+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1744896 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:05.890091+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1744896 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:06.890194+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1744896 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:07.890345+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1744896 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:08.890443+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1744896 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:09.890578+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1744896 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:10.890681+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1744896 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:11.890791+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1744896 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:12.890908+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1744896 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:13.891008+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1736704 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:14.891129+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1736704 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:15.891220+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1736704 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:16.891316+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1736704 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:17.891423+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1736704 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:18.891511+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1736704 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:19.891614+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1736704 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:20.891735+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1736704 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:21.891863+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1736704 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:22.891958+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1736704 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:23.892052+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1736704 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:24.892146+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1736704 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:25.892287+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1736704 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:26.892397+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1736704 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:27.892536+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69599232 unmapped: 1728512 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:28.892649+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69599232 unmapped: 1728512 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:29.892807+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69599232 unmapped: 1728512 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:30.892899+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69599232 unmapped: 1728512 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:31.893003+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69599232 unmapped: 1728512 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:32.893127+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 1720320 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:33.893247+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 1720320 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:34.893342+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 1720320 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:35.893440+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 1720320 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:36.893521+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 1720320 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:37.893623+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69615616 unmapped: 1712128 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:38.893710+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:39.893792+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:40.893889+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:41.893991+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:42.894089+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:43.894186+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:44.894284+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:45.894409+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:46.894511+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:47.894658+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:48.894802+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:49.894998+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:50.895120+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:51.895234+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:52.895346+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:53.895442+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:54.895586+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:55.895715+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:56.895873+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:57.896063+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:58.896197+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:59.896351+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:00.896483+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:01.896614+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:02.896783+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:03.896938+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:04.897102+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1703936 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:05.897212+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:06.897343+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:07.897491+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:08.897615+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:09.897709+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:10.897973+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:11.898064+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:12.898164+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:13.898265+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:14.898368+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:15.898463+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:16.898558+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:17.898659+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:18.898793+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:19.898938+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:20.899054+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:21.899188+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:22.899319+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:23.899446+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:24.899578+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:25.899734+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:26.899853+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1695744 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:27.899986+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 1687552 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:28.900081+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 1687552 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:29.900198+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 1687552 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:30.900330+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 1687552 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:31.900460+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:32.900595+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:33.900698+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:34.900833+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:35.900962+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:36.901067+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:37.901186+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:38.901286+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:39.901386+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:40.901481+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:41.901582+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:42.901647+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:43.901738+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:44.901882+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:45.902027+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:46.902223+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:47.902348+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:48.902448+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:49.902547+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:50.902676+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:51.902790+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:52.902915+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:53.903028+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:54.903156+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:55.903291+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:56.903419+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:57.903549+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:58.903691+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:59.903796+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:00.903929+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:01.904068+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:02.904186+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:03.904310+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:04.904439+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:05.904534+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:06.904635+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:07.904806+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:08.904937+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:09.905089+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:10.905208+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:11.905302+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:12.905426+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:13.905573+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:14.905728+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1679360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:15.905864+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:16.905975+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:17.906093+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:18.906212+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:19.906336+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:20.906432+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:21.906562+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:22.906711+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:23.906864+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:24.907020+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:25.907136+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:26.907293+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:27.907453+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:28.907589+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:29.907697+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:30.907959+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:31.908117+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:32.908281+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:33.908442+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:34.908595+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:35.908794+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:36.908972+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:37.909134+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:38.909261+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:39.909391+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:40.909539+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:41.909647+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1671168 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:42.909770+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 1662976 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:43.909915+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 1662976 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:44.910050+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 1662976 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:45.910179+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 1662976 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:46.910314+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 1662976 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:47.910439+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69672960 unmapped: 1654784 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:48.910560+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 1646592 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:49.910694+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 1646592 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:50.910793+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 1646592 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:51.910903+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 1646592 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:52.911022+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 1646592 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:53.911143+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 1646592 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:54.911258+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 1646592 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:55.911379+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 1646592 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:56.911527+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 1646592 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:57.911697+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: mgrc ms_handle_reset ms_handle_reset con 0x5640f0959c00
Nov 26 12:58:05 compute-0 ceph-osd[90297]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3303149021
Nov 26 12:58:05 compute-0 ceph-osd[90297]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3303149021,v1:192.168.122.100:6801/3303149021]
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: get_auth_request con 0x5640f07e2800 auth_method 0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: mgrc handle_mgr_configure stats_period=5
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:58.911833+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:59.911968+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:00.912107+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:01.912266+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:02.912417+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:03.912572+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:04.912705+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:05.912829+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:06.912957+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:07.913108+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:08.913273+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:09.913407+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:10.913533+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:11.913662+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:12.913877+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:13.913982+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:14.914122+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:15.914270+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:16.914422+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:17.914611+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:18.914729+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:19.914870+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:20.914973+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:21.915108+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:22.915243+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:23.915399+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:24.915539+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:25.915703+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:26.915864+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:27.916058+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:28.916212+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:29.916367+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:30.916533+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:31.916677+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:32.916824+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1564672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:33.916977+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1556480 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:34.917110+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1556480 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:35.917263+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1556480 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:36.917435+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1556480 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:37.917628+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1556480 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:38.917742+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69779456 unmapped: 1548288 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:39.917932+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69779456 unmapped: 1548288 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:40.918103+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69779456 unmapped: 1548288 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:41.918278+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69779456 unmapped: 1548288 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:42.918468+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69779456 unmapped: 1548288 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:43.918639+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:44.918775+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:45.918924+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:46.919090+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:47.919294+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:48.919472+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:49.923145+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:50.923389+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:51.923617+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:52.923867+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:53.924046+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:54.924224+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:55.924372+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:56.924510+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:57.924671+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:58.924842+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:59.925003+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1540096 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:00.925161+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69795840 unmapped: 1531904 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:01.925362+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69795840 unmapped: 1531904 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:02.925548+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69795840 unmapped: 1531904 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:03.925776+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69795840 unmapped: 1531904 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:04.925983+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69795840 unmapped: 1531904 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:05.926193+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69804032 unmapped: 1523712 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:06.926404+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69804032 unmapped: 1523712 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:07.926625+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69804032 unmapped: 1523712 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:08.926835+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:09.926991+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:10.927182+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:11.927355+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:12.927883+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:13.928385+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:14.928568+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:15.928682+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:16.928815+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:17.928967+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:18.929111+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:19.929242+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:20.929344+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:21.929474+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:22.929580+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:23.929707+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:24.929861+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:25.929996+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:26.930149+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:27.930312+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:28.930467+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:29.930592+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:30.930701+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:31.930865+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:32.931000+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:33.931165+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:34.931306+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:35.931457+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:36.931628+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:37.931805+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:38.931948+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:39.932068+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:40.932192+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:41.932307+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:42.932447+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:43.932585+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:44.932721+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:45.932826+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:46.932928+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:47.933073+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:48.933194+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:49.933306+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:50.933414+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:51.933560+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:52.933689+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:53.933814+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:54.933965+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:55.934099+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:56.934231+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:57.934426+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:58.934558+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:59.934725+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:00.934847+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:01.934966+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:02.935079+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1515520 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:03.935224+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69820416 unmapped: 1507328 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:04.935338+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69820416 unmapped: 1507328 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:05.935460+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:06.935611+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:07.935774+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:08.935914+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:09.936047+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:10.936200+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:11.936351+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:12.936493+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:13.936643+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:14.936817+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:15.936968+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:16.937127+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:17.937270+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:18.937414+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:19.937554+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:20.937728+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:21.937906+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:22.938045+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:23.938192+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:24.938301+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:25.938440+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:26.938579+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:27.938748+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:28.938907+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:29.939026+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:30.939147+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:31.939275+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:32.939408+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1499136 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:33.939523+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:34.939695+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:35.939841+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:36.939962+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:37.940150+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:38.940295+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:39.940477+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:40.940629+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:41.940794+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:42.940941+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:43.941082+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:44.941224+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:45.941361+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:46.941492+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:47.941656+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:48.941797+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1482752 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:49.941906+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1482752 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:50.942055+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1482752 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:51.942206+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1482752 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:52.942369+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1482752 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:53.942503+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1482752 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:54.942654+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1482752 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:55.942792+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1482752 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:56.942898+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1482752 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:57.943044+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1482752 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:58.943186+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:59.943300+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:00.943446+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:01.943708+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:02.943874+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:03.944060+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:04.944203+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:05.944363+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:06.944483+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:07.944646+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:08.944748+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:09.944889+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:10.945005+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:11.945129+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:12.945260+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:13.945370+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:14.945480+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:15.945595+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:16.945724+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:17.945924+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:18.946056+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:19.946188+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:20.946333+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:21.946457+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:22.946614+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:23.946785+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:24.946912+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:25.947040+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:26.947167+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:27.947333+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:28.947461+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:29.947610+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:30.947750+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:31.947902+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:32.948009+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:33.948162+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:34.948314+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:35.948443+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:36.948616+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:37.948846+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:38.949026+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:39.949197+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:40.949345+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:41.949503+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:42.949633+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:43.949789+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:44.949937+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:45.950075+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:46.950230+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:47.950405+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:48.950536+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:49.950668+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:50.950819+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:51.950985+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:52.951133+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:53.951276+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:54.951432+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:55.951594+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:56.952511+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:57.952857+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:58.953058+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:59.953181+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:00.953325+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:01.953508+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:02.953670+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:03.953846+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1490944 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:04.954006+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1482752 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:05.954142+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1474560 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:06.954261+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1474560 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:07.954425+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1474560 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:08.954575+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1474560 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:09.954713+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1474560 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:10.954831+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1474560 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:11.954981+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1474560 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:12.955127+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1474560 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:13.955232+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1474560 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:14.955336+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1474560 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:15.955489+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1474560 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:16.955658+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1474560 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:17.955828+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1466368 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:18.955957+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1466368 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:19.956080+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1466368 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:20.956979+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1466368 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:21.957276+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1466368 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:22.957395+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1466368 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:23.957495+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1466368 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:24.957620+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1466368 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:25.957739+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1466368 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:26.957801+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1466368 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:27.957964+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1466368 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:28.958085+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1466368 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:29.958215+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1466368 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:30.958326+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xad7ef/0x154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:05 compute-0 ceph-osd[90297]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1466368 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: bluestore.MempoolThread(0x5640efa8fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784665 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:31.958446+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1466368 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:32.958564+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 70090752 unmapped: 1236992 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: do_command 'config diff' '{prefix=config diff}'
Nov 26 12:58:05 compute-0 ceph-osd[90297]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 26 12:58:05 compute-0 ceph-osd[90297]: do_command 'config show' '{prefix=config show}'
Nov 26 12:58:05 compute-0 ceph-osd[90297]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 26 12:58:05 compute-0 ceph-osd[90297]: do_command 'counter dump' '{prefix=counter dump}'
Nov 26 12:58:05 compute-0 ceph-osd[90297]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 26 12:58:05 compute-0 ceph-osd[90297]: do_command 'counter schema' '{prefix=counter schema}'
Nov 26 12:58:05 compute-0 ceph-osd[90297]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:33.958663+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 70483968 unmapped: 1892352 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: tick
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_tickets
Nov 26 12:58:05 compute-0 ceph-osd[90297]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:34.958790+0000)
Nov 26 12:58:05 compute-0 ceph-osd[90297]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 1785856 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:05 compute-0 ceph-osd[90297]: do_command 'log dump' '{prefix=log dump}'
Nov 26 12:58:05 compute-0 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 12:58:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:58:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:58:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:58:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:58:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 12:58:05 compute-0 ceph-mgr[75236]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 12:58:05 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14465 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 26 12:58:06 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3440467166' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 12:58:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:58:06 compute-0 rsyslogd[962]: imjournal from <np0005536586:ceph-osd>: begin to drop messages due to rate-limiting
Nov 26 12:58:06 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:58:06 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14467 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 12:58:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 26 12:58:06 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3350689612' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 26 12:58:06 compute-0 ceph-mon[74966]: from='client.14449 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:06 compute-0 ceph-mon[74966]: from='client.14453 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:06 compute-0 ceph-mon[74966]: from='client.14457 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:06 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/4162174485' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 26 12:58:06 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/3440467166' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 12:58:06 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/3350689612' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 26 12:58:06 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14471 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 12:58:06 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 26 12:58:06 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1849343515' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 26 12:58:07 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14479 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 12:58:07 compute-0 ceph-f7d7fe93-41e5-51c4-b72d-63b38686102e-mgr-compute-0-whkbdn[75232]: 2025-11-26T12:58:07.217+0000 7f35d37a6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 26 12:58:07 compute-0 ceph-mgr[75236]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 26 12:58:07 compute-0 ceph-mon[74966]: from='client.14461 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:07 compute-0 ceph-mon[74966]: from='client.14465 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:07 compute-0 ceph-mon[74966]: pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:58:07 compute-0 ceph-mon[74966]: from='client.14467 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 12:58:07 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1849343515' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 26 12:58:07 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 26 12:58:07 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1685656159' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 26 12:58:07 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 26 12:58:07 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1623996138' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 26 12:58:07 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 26 12:58:07 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4141702285' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 26 12:58:07 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 26 12:58:07 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1596995339' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 26 12:58:08 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 26 12:58:08 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1485580640' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 26 12:58:08 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 26 12:58:08 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1867052214' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 26 12:58:08 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:58:08 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 26 12:58:08 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4118357802' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 26 12:58:08 compute-0 ceph-mon[74966]: from='client.14471 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 12:58:08 compute-0 ceph-mon[74966]: from='client.14479 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 12:58:08 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1685656159' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 26 12:58:08 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1623996138' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 26 12:58:08 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/4141702285' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 26 12:58:08 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1596995339' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 26 12:58:08 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1485580640' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 26 12:58:08 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1867052214' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 26 12:58:08 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/4118357802' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 26 12:58:08 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 26 12:58:08 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/383441113' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 26 12:58:08 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 26 12:58:08 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/598505242' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 26 12:58:08 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 26 12:58:08 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/922276148' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 26 12:58:08 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 26 12:58:08 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3015517953' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 26 12:58:09 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 26 12:58:09 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2884655991' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 26 12:58:09 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 26 12:58:09 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/640217898' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 26 12:58:09 compute-0 crontab[256076]: (root) LIST (root)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.041648 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.041694 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.041706 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960934639s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.926368713s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960920334s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926368713s@ mbc={}] exit Reset 0.000026 1 0.000051
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960920334s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926368713s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960920334s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926368713s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960920334s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926368713s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960920334s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926368713s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960920334s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926368713s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.056179 16 0.000030
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.059040 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.059090 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.059112 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.13] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.943595886s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909278870s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.13] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.943580627s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909278870s@ mbc={}] exit Reset 0.000027 1 0.000041
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.943580627s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909278870s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.943580627s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909278870s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.943580627s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909278870s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.943580627s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909278870s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.943580627s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909278870s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.039465 10 0.000018
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.042195 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.042227 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.042238 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960314751s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.926383972s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960297585s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926383972s@ mbc={}] exit Reset 0.000029 1 0.000044
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960297585s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926383972s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960297585s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926383972s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960297585s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926383972s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960297585s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926383972s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960297585s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926383972s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 3.031593 4 0.000018
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.034335 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.034394 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 3.034644 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.11] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.968079567s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.934272766s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.11] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.968064308s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934272766s@ mbc={}] exit Reset 0.000026 1 0.000043
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.968064308s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934272766s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.968064308s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934272766s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.968064308s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934272766s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.968064308s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934272766s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.968064308s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934272766s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.039700 10 0.000020
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.042286 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.042494 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.042505 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960093498s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.926414490s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960080147s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926414490s@ mbc={}] exit Reset 0.000165 1 0.000037
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960080147s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926414490s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960080147s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926414490s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960080147s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926414490s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960080147s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926414490s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.960080147s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926414490s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.040080 10 0.000018
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.042646 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.042704 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.042723 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.12] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.959667206s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.926391602s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.12] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.959650040s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926391602s@ mbc={}] exit Reset 0.000037 1 0.000075
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.959650040s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926391602s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.959650040s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926391602s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.959650040s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926391602s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.959650040s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926391602s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.959650040s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926391602s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 3.032204 4 0.000016
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.034751 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.034787 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 3.034800 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.12] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.967455864s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.934303284s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.12] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.967440605s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934303284s@ mbc={}] exit Reset 0.000025 1 0.000037
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.967440605s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934303284s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.967440605s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934303284s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.967440605s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934303284s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.967440605s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934303284s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.967440605s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934303284s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.040631 10 0.000019
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.043045 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.043075 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.043086 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.11] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.959156990s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.926422119s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.11] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.959138870s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926422119s@ mbc={}] exit Reset 0.000030 1 0.000049
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.959138870s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926422119s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.959138870s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926422119s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.959138870s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926422119s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.959138870s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926422119s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.959138870s) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926422119s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 3.032883 4 0.000024
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.035747 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.035783 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 3.035796 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.9] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.966730118s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.934188843s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.9] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.966711044s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934188843s@ mbc={}] exit Reset 0.000032 1 0.000053
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.966711044s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934188843s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.966711044s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934188843s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.966711044s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934188843s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.966711044s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934188843s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.966711044s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934188843s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.058227 16 0.000037
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.061089 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.061130 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.061146 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.040936 10 0.000020
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.043288 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.043332 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.043348 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.958840370s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.926429749s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.5] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.941487312s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909187317s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.5] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.941465378s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909187317s@ mbc={}] exit Reset 0.000209 1 0.000222
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.941465378s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909187317s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.941465378s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909187317s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.941465378s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909187317s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.941465378s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909187317s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.941465378s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909187317s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 3.033132 4 0.000019
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.036062 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.036105 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 3.036118 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.19] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.966361046s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.934234619s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.19] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.966345787s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934234619s@ mbc={}] exit Reset 0.000026 1 0.000039
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.966345787s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934234619s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.966345787s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934234619s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.966345787s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934234619s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.966345787s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934234619s@ mbc={}] exit Start 0.000044 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.966345787s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934234619s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.6] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.962287903s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.925605774s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.6] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.957566261s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925605774s@ mbc={}] exit Reset 0.004733 1 0.004741
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.957566261s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925605774s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.957566261s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925605774s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.957566261s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925605774s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.957566261s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925605774s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.957566261s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.925605774s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.041505 10 0.000019
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.043797 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.043828 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.043840 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.958274841s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.926429749s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.958259583s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926429749s@ mbc={}] exit Reset 0.000030 1 0.000047
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.958259583s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926429749s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.958259583s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926429749s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.958259583s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926429749s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.958259583s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926429749s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.958259583s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926429749s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.041582 10 0.000018
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.043813 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.043847 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.043863 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1a] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.958201408s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 100.926445007s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1a] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.958189964s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926445007s@ mbc={}] exit Reset 0.000021 1 0.000034
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.958189964s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926445007s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.958189964s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926445007s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.958189964s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926445007s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.958189964s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926445007s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.958189964s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926445007s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 3.033657 4 0.000013
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.036496 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.036536 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 3.036548 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1a] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965899467s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.934219360s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1a] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965879440s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934219360s@ mbc={}] exit Reset 0.000029 1 0.000040
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965879440s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934219360s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965879440s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934219360s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965879440s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934219360s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965879440s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934219360s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965879440s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934219360s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 3.033545 4 0.000019
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.036508 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.036556 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 3.036568 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.15] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965804100s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.934226990s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.15] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965792656s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934226990s@ mbc={}] exit Reset 0.000022 1 0.000034
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965792656s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934226990s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965792656s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934226990s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965792656s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934226990s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965792656s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934226990s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965792656s) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934226990s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.041799 10 0.000019
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.043904 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.043938 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.043949 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.957959175s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 100.926460266s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.957945824s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926460266s@ mbc={}] exit Reset 0.000024 1 0.000043
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.957945824s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926460266s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.957945824s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926460266s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.957945824s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926460266s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.957945824s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926460266s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.957945824s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926460266s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 3.033987 4 0.000015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.036638 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.036694 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 3.036710 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.10] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965680122s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active pruub 102.934288025s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.10] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965665817s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934288025s@ mbc={}] exit Reset 0.000026 1 0.000039
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965665817s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934288025s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965665817s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934288025s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965665817s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934288025s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965665817s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934288025s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50 pruub=12.965665817s) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.934288025s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.059407 16 0.000035
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.062338 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.062543 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.062629 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.013027 2 0.000180
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.940426826s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909156799s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.940408707s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909156799s@ mbc={}] exit Reset 0.000032 1 0.000087
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.940408707s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909156799s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.940408707s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909156799s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.940408707s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909156799s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.940408707s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909156799s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.940408707s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909156799s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.059444 16 0.000030
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.062236 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.062275 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.062288 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.011552 2 0.000018
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.11] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.940348625s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909233093s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.11] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.1( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011812 2 0.000020
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.1( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.1( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.1( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.011749 2 0.000104
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.011521 2 0.000756
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011513 2 0.001498
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.012280 2 0.001151
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.940299988s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909233093s@ mbc={}] exit Reset 0.000532 1 0.004508
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.940299988s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909233093s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.940299988s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909233093s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.940299988s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909233093s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.940299988s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909233093s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.940299988s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909233093s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.14] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.14] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.011566 2 0.000026
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1a] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.955779076s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926429749s@ mbc={}] exit Reset 0.003077 1 0.002992
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.955779076s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926429749s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.955779076s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926429749s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.955779076s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926429749s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.955779076s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926429749s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50 pruub=10.955779076s) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.926429749s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1a] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.15] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.15] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.2] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.2] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 50 handle_osd_map epochs [50,50], i have 50, src has [1,50]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.2] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.2] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.2] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.2] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.19(unlocked)] enter Initial
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000017 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000097 1 0.000025
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.b(unlocked)] enter Initial
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000013 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000008
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000031 1 0.000020
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.13(unlocked)] enter Initial
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000010 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000006
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000191 1 0.000020
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.12(unlocked)] enter Initial
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000020 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000014
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000547 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000100 1 0.000578
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.10(unlocked)] enter Initial
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000016 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000008
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000035 1 0.000020
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.1a(unlocked)] enter Initial
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000011 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000038 1 0.000019
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.065231 16 0.000036
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.068119 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.068196 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.068220 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.15] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.934625626s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active pruub 98.909156799s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.15] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.934603691s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909156799s@ mbc={}] exit Reset 0.000046 1 0.006413
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.934603691s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909156799s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.934603691s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909156799s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.934603691s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909156799s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.934603691s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909156799s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=8.934603691s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.909156799s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.f(unlocked)] enter Initial
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000023 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000010
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000098 1 0.000191
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.11(unlocked)] enter Initial
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000017 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001215 1 0.000020
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.2(unlocked)] enter Initial
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000016 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000008
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001260 1 0.001075
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.14(unlocked)] enter Initial
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000024 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000048 1 0.000025
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.6(unlocked)] enter Initial
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000029 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000047 1 0.000023
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.8] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.8] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.3] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.8] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.a] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.4] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.4] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.a] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.8] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.18] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.18] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.11] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.11] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.12] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.12] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.12] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.12] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.3] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.14] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.14] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.18] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.18] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.10] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.10] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.3] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.3] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.17] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.17] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.9] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.9] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.6] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.6] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.4] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.4] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.4] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.3] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.4] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.9] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.9] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.6] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.5] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.18] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.6] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.18] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.13] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.13] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.6] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1a] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1a] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.19] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.19] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.10] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.10] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.6] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.11] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.11] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.9] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.5] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.5] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1a] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.9] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.15] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.15] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1a] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.11] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.11] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.015384 2 0.000028
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.015843 2 0.000032
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016145 2 0.000019
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.015102 2 0.000031
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.015015 2 0.000022
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.15] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.15] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014913 2 0.000018
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012920 2 0.000028
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011639 2 0.000072
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010353 2 0.000246
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009126 2 0.000023
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.14( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.010236 2 0.000024
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.14( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.14( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 50 pg[10.14( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:20.083521+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:39:49.435062+0000 osd.1 (osd.1) 48 : cluster [DBG] 2.5 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:39:49.449316+0000 osd.1 (osd.1) 49 : cluster [DBG] 2.5 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 845286 data_alloc: 218103808 data_used: 200704
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 50 handle_osd_map epochs [50,51], i have 50, src has [1,51]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 50 handle_osd_map epochs [51,51], i have 51, src has [1,51]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.908692 3 0.000022
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.908719 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.911476 3 0.000023
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.911497 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.912574 3 0.000035
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.912594 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000043 1 0.000094
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000048 1 0.000064
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000040 1 0.000054
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000027 1 0.000034
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000023 1 0.000030
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000020 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000021 1 0.000042
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000022 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000030 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.913042 6 0.000031
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.913066 3 0.000024
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.913084 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000025 1 0.000034
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.913745 3 0.000028
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.913762 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000015 1 0.000034
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000041 1 0.000050
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.907971 2 0.000035
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 0.920379 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000016 1 0.000030
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.908102 2 0.000183
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 0.919921 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.917552 3 0.000036
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.917567 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.915971 6 0.000031
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000025 1 0.000034
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.919098 3 0.000023
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.919114 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000013 1 0.000023
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000023 1 0.000032
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.889027 2 0.000017
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.904259 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.12( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.918298 3 0.000020
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.918314 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000024 1 0.000033
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.908052 2 0.000086
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 0.919698 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.917625 3 0.000023
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.917645 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000029 1 0.000042
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.889316 2 0.000021
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.902526 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.f( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.909136 2 0.000031
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 0.920778 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.916637 3 0.000023
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.916657 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.916567 3 0.000021
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.1( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.909570 2 0.000019
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.916589 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.1( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.921446 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000035 1 0.000107
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.890346 2 0.000030
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.906553 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.b( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.909532 2 0.000024
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.922323 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 lc 33'18 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000508 1 0.000520
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.1( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.890238 2 0.000079
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.901888 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.2( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.916318 3 0.001063
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.916334 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000058 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000033 1 0.000042
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.909890 2 0.000022
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.922976 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.910560 2 0.000046
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.923829 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.890763 2 0.000026
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.899964 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.6( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.916032 3 0.000081
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.916054 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.892222 2 0.000022
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.907730 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.19( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000034 1 0.000048
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.891334 2 0.000106
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.906399 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.1a( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.913677 3 0.000171
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.891618 2 0.000028
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.913696 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.906698 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.10( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000028 1 0.000038
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.891813 2 0.000023
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.907878 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.13( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.909681 3 0.000511
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.909708 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000030 1 0.000045
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.14( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.890888 2 0.000104
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.14( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.901198 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.14( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.14( v 49'65 lc 44'54 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.912003 3 0.000023
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.912021 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.891852 2 0.000019
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.904762 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.11( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000025 1 0.000035
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001443 2 0.000024
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002696 2 0.000057
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000027 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001512 2 0.000402
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.918624 7 0.000032
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001611 2 0.000546
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000128 2 0.000023
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003284 2 0.000026
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000010 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000320 2 0.000023
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000871 2 0.000024
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003174 2 0.000024
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000557 2 0.000024
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.003555 4 0.000059
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.003525 4 0.000051
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000218 1 0.000028
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005017 3 0.000046
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.12( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.004977 4 0.000042
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004631 3 0.000028
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.004601 4 0.000033
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000030 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.f( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 51 handle_osd_map epochs [51,51], i have 51, src has [1,51]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.916506 7 0.000185
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.919375 7 0.000048
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.917983 7 0.000032
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 lc 33'18 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008224 4 0.000103
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.b( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008118 4 0.000053
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.2( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.14( v 49'65 lc 44'54 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008563 4 0.000518
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.008361 5 0.000049
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008476 4 0.000089
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007996 4 0.000044
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007910 4 0.000034
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.19( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007793 4 0.000092
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007606 4 0.000156
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.10( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007553 4 0.000091
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000083 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.6( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000095 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007352 4 0.000032
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.1a( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.14( v 49'65 lc 44'54 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.007427 4 0.000086
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 lc 33'18 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.008991 5 0.000092
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 lc 33'18 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.14( v 49'65 lc 44'54 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.925246 7 0.000029
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.929820 7 0.000027
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.929907 7 0.000050
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000195 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000106 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.13( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.11( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.919794 7 0.000028
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.925758 7 0.000028
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.925425 7 0.000100
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.923045 7 0.000071
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.929158 7 0.000026
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.925737 7 0.000413
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.920877 7 0.000080
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.928679 7 0.000078
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.928767 7 0.000046
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.928281 7 0.000233
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.925915 7 0.000054
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.926980 7 0.000026
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.928897 7 0.000312
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.925187 7 0.000033
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.929493 7 0.000037
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.930675 7 0.000051
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.926733 7 0.000108
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.930394 7 0.000033
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.930630 7 0.000031
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.921151 7 0.000185
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.927214 7 0.000276
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.930464 7 0.000045
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.935339 7 0.000049
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.934109 7 0.000051
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.931302 7 0.000037
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.933967 7 0.000038
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.932847 7 0.000041
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.934884 7 0.000164
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.932927 7 0.000048
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.933453 7 0.000039
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.934276 7 0.000029
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.935498 7 0.000030
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.929698 7 0.000028
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.931880 7 0.000044
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.934271 7 0.000045
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.926698 7 0.000028
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.926313 7 0.000210
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.927787 7 0.000058
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.928214 7 0.000055
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.929013 7 0.000058
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.932948 7 0.000052
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.931304 7 0.000026
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.930335 7 0.000026
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.937012 7 0.000023
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.932586 7 0.000027
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.927193 7 0.000491
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.931911 7 0.000031
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.938742 7 0.000028
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.927285 7 0.000541
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.938665 7 0.000028
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.935595 7 0.000029
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.933954 7 0.000032
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.938251 7 0.000025
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.934658 7 0.000065
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.938517 7 0.000054
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 49) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:39:49.435062+0000 osd.1 (osd.1) 48 : cluster [DBG] 2.5 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:39:49.449316+0000 osd.1 (osd.1) 49 : cluster [DBG] 2.5 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.087407 3 0.000017
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.087438 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:21.083619+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.132562 2 0.000055
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.132598 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.198177 2 0.000035
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.198426 2 0.000221
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000018 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000010 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.204102 3 0.000018
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.204132 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.133916 1 0.000089
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.330639 2 0.000014
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000015 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.205725 1 0.000111
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.536478 2 0.000009
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000015 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.540379 2 0.000013
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.540404 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 51 heartbeat osd_stat(store_statfs(0x4fdc9a000/0x0/0x4ffc00000, data 0xb1287/0x123000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e3f9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.137267 1 0.000084
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.668821 1 0.000060
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.096109 1 0.000048
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 lc 33'18 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.764719 1 0.000038
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 lc 33'18 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 lc 33'18 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 lc 33'18 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.118572 1 0.000066
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.14( v 49'65 lc 44'54 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.883353 2 0.000071
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.14( v 49'65 lc 44'54 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.14( v 49'65 lc 44'54 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.14( v 49'65 lc 44'54 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000037 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/43 les/c/f=51/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 49'65 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.010534 1 0.000049
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 49'65 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 49'65 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.897312 1 0.000028
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[10.14( v 49'65 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [1] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 49'65 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.6] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.897369 1 0.000062
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.894083 1 0.000021
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.4] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.894126 1 0.000075
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.14] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.894115 1 0.000026
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1a] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.894166 1 0.000085
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.893856 1 0.000468
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.893633 1 0.000050
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.9] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.893635 1 0.000021
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.13] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.893660 1 0.000014
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.893704 1 0.000012
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.893734 1 0.000013
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.19] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.893766 1 0.000013
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.893794 1 0.000012
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.3] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.893313 1 0.000055
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.893304 1 0.000042
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.6] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.893328 1 0.000044
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.9] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.893344 1 0.000014
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.893391 1 0.000052
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.893431 1 0.000039
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.893210 1 0.000031
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.4] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.893254 1 0.000009
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.18] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.893295 1 0.000010
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.6] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.893344 1 0.000041
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.10] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.893378 1 0.000046
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.893443 1 0.000050
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.10] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.893494 1 0.000124
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.14] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.889872 1 0.000056
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1a] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.889948 1 0.000024
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.2] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.889995 1 0.000022
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.3( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.3] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.890030 1 0.000012
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.890178 1 0.000015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.890256 1 0.000013
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.15] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.890319 1 0.000133
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.d( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.889395 1 0.000060
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.2] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.889655 1 0.000046
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.2( v 44'2 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.2] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.889728 1 0.000095
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.889825 1 0.000026
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1f( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.889874 1 0.000020
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.8] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.889997 1 0.000016
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.890101 1 0.000023
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.11] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.890438 1 0.000015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.890569 1 0.000041
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.12( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.12] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.890610 1 0.000021
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.8( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.8] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.890659 1 0.000063
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.18( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.18] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.890704 1 0.000099
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.890815 1 0.000112
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1a( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1a] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.889405 1 0.000030
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.889402 1 0.000021
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1b( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.889460 1 0.000062
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.11( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.11] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.889425 1 0.000042
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.11] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.889470 1 0.000064
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1e( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.883823 1 0.000029
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.4] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.883879 1 0.000086
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.a] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.883917 1 0.000081
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.12] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.883934 1 0.000022
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1c( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.884032 1 0.000095
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.15] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.883981 1 0.000068
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.15( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.15] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.883987 1 0.000063
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.5] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.884002 1 0.000096
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.821856 1 0.000098
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.17] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.768600 1 0.000072
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.9( v 44'2 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.9] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.705472 1 0.000078
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.18] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.365716 1 0.000070
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.6( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.007487 1 0.000117
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.6( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.904823 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.6( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.821358 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.6] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1d( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.014990 1 0.000082
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1d( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.912393 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1d( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.831813 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.4( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.022106 1 0.000040
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.4( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.916215 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.4( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.841480 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.4] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.14( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.029451 1 0.000021
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.14( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.923600 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.14( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.853441 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.14] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1a( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.036838 1 0.000022
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1a( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.930991 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.1a( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.850815 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1a] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.1b( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.044563 1 0.000021
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.1b( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.938779 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.1b( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.868748 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.b( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.051514 1 0.000044
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.b( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.945407 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[8.b( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.871642 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.9( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.058894 1 0.000044
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.9( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.952578 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.9( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.878049 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.9] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.13( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.066204 1 0.000063
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.13( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.959867 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.13( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.882947 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.13] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.073652 1 0.000059
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.967340 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.1( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.896521 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.f( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.080970 1 0.000019
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.f( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.974697 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[7.f( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.900823 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.19( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.088319 1 0.000020
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.19( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.982077 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 51 pg[11.19( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.903015 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.19] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 51 handle_osd_map epochs [52,52], i have 51, src has [1,52]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994825 3 0.000029
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.998144 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995018 3 0.000025
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.998232 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995400 3 0.000027
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.997489 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998471 4 0.000034
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.998781 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995323 3 0.000050
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.998303 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995408 3 0.000026
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.997416 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995794 3 0.000025
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.996693 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999322 4 0.000045
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995640 3 0.000033
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.999409 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999455 4 0.000034
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.999528 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.997472 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999855 4 0.000053
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000035 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996356 3 0.000023
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.996939 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999939 4 0.000047
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000339 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996768 3 0.000024
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000478 4 0.000056
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.997121 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000557 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996948 3 0.000028
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.997142 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.c( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.096271 4 0.000022
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.c( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.990060 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.c( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.918759 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.3( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.103107 4 0.000018
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.3( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.996923 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.3( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.925710 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.3] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.e( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.110426 4 0.000020
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.e( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.003776 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.e( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.932089 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.6( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.117785 4 0.000020
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.6( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.011124 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.6( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.937070 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.6] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.9( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.125222 4 0.000018
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.9( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.018575 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.9( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.945597 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.9] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.1f( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.132488 4 0.000020
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.1f( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.025858 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.1f( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.951064 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.e( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.139849 4 0.000019
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.e( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.033291 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.e( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.962476 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.f( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.147175 4 0.000064
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.f( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.040632 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.f( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.970171 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.4( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.154596 4 0.000057
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.4( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.047833 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.4( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.974652 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.4] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.18( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.161912 4 0.000035
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.18( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.055195 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.18( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.985842 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.18] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.6( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.169283 4 0.000060
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.6( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.062604 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.6( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.990087 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.6] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.10( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.176609 4 0.000056
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.10( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.069978 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.10( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.000404 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.10] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.1f( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.183977 4 0.000023
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.1f( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.077379 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.1f( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.007893 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.10( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.191371 4 0.000026
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.10( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.084876 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.10( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.006064 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.10] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.1a( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.198310 4 0.000057
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.1a( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.088216 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.1a( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.023597 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1a] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.2( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.205683 4 0.000022
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.2( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.095659 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.2( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.029791 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.2] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.14( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.213405 4 0.000387
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.14( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.107029 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.14( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.037738 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.14] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.3( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.220425 4 0.000019
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.3( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.110447 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.3( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.041775 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.3] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.c( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.227743 4 0.000021
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.c( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.117804 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.c( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.051798 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:22.083721+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.1( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.235052 4 0.000022
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.1( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.125260 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.1( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.058135 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.15( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.242334 4 0.000038
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.15( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.132629 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.15( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.067668 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.15] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.2( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.249502 4 0.000028
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.2( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.138931 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.2( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.073240 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.2] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.d( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.256977 4 0.000091
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.d( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.147348 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.d( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.080412 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.b( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.263920 4 0.000032
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.b( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.153701 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.b( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.087203 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.2( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.271501 4 0.000077
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.2( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.161190 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.2( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.096733 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.2] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.1f( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.278711 4 0.000043
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.1f( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.168563 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.1f( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.098292 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.8( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.285883 4 0.000119
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.8( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.175886 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.8( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.107815 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.8] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.1c( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.293211 4 0.000096
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.1c( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.183256 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.1c( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.109602 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.11( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.300217 4 0.000344
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.11( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.190643 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.11( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.118465 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.11] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.1c( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.307490 4 0.000246
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.1c( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.198028 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.1c( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.127078 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.12( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.314863 4 0.000033
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.12( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.205478 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.12( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.133721 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.12] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.8( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.322203 4 0.000055
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.8( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.212869 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.8( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.145859 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.8] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.18( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.329508 4 0.000032
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.18( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.220212 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.18( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.151593 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.18] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.d( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.336889 4 0.000042
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.d( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.227715 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.d( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.162022 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.1a( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.344159 4 0.000028
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.1a( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.235100 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.1a( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.161863 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1a] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.e( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.351563 4 0.000020
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.e( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.240994 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.e( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.178027 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.1b( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.358975 4 0.000017
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.1b( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.248398 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.1b( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.181010 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.11( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.366313 4 0.000017
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.11( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.255795 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.11( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.186157 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.11] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.11( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.373754 4 0.000015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.11( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.263228 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.11( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.190441 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.11] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.1e( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.381111 4 0.000017
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.1e( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.270604 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.1e( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.202574 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.4( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.388368 4 0.000058
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.4( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.272225 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.4( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=1 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.210911 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.4] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.a( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.395792 4 0.000077
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.a( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.279704 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.a( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.218483 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.a] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.12( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.403039 4 0.000023
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.12( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.286982 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.12( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.222664 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.12] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.1c( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.410416 4 0.000018
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.1c( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.294373 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.1c( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.232647 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.1c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.15( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.417808 4 0.000019
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.15( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.301873 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.15( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.229688 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.15] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.15( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.425169 4 0.000017
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.15( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.309182 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.15( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.243199 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.15] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.5( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.432558 4 0.000018
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.5( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.316565 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[7.5( empty lb MIN local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.251325 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.5] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 1335296 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.1b( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.439913 4 0.000027
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.1b( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.323947 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.1b( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [2] r=-1 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.262565 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.1b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.17( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.476849 5 0.000072
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.17( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 1.298755 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.17( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.302214 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.17] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.9( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 DELETING pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.484196 5 0.000060
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.9( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 1.252836 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[11.9( v 44'2 (0'0,44'2] lb MIN local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [2] r=-1 lpr=50 pi=[47,50)/1 luod=0'0 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.303470 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[11.9] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.18( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.491487 5 0.000084
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.18( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 1.197011 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.18( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.314237 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.18] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.f( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 DELETING pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.506522 5 0.000104
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.f( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.872291 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[8.f( v 33'4 (0'0,33'4] lb MIN local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=-1 lpr=50 pi=[45,50)/1 luod=0'0 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 2.331354 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[8.f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.e(unlocked)] enter Initial
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=0 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000031 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=0 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000120 1 0.000030
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 52 handle_osd_map epochs [52,52], i have 52, src has [1,52]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.2(unlocked)] enter Initial
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=0 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000025 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=0 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000014
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000051 1 0.000027
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.6(unlocked)] enter Initial
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=0 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000023 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=0 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000013
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000046 1 0.000024
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 52 handle_osd_map epochs [52,52], i have 52, src has [1,52]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.a(unlocked)] enter Initial
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=0 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000066 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=0 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000013
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000052 1 0.000024
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.938763 5 0.000208
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.937907 5 0.000563
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.940131 5 0.000324
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.939395 5 0.000731
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.939614 5 0.000450
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.938952 5 0.000862
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/Activating 0.939771 5 0.000349
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/Activating 0.939723 5 0.000488
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.939256 5 0.000133
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000207 1 0.000022
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.2( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001168 2 0.000029
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.2( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.2( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.2( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.938764 5 0.000254
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.939554 5 0.000173
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.941005 5 0.000109
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.939129 5 0.000089
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.001454 2 0.000029
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001238 2 0.000034
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000841 1 0.000012
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.939867 5 0.001457
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.940474 5 0.000120
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.e( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.002292 2 0.000330
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.e( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.e( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000019 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[6.e( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.940012 5 0.001068
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.040456 2 0.000016
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.041520 1 0.000009
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000287 1 0.000155
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.038359 2 0.000063
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.080322 1 0.000033
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000228 1 0.000043
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.024349 2 0.000049
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.104946 1 0.000009
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000221 1 0.000028
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 52 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 52 handle_osd_map epochs [53,53], i have 52, src has [1,53]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.066179 1 0.000120
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.086776 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.085056 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.2( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.146353 2 0.000037
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.2( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.147725 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.2( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=52/53 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.085186 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.145913 2 0.000089
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.147488 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.853003502s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.815834045s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852621078s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815834045s@ mbc={}] exit Reset 0.000410 1 0.000689
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852621078s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815834045s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852621078s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815834045s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852621078s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815834045s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852621078s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815834045s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852621078s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815834045s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.e( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.145747 2 0.000800
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.e( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.148541 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.e( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.e( v 37'39 lc 33'17 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=52/53 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.105939 1 0.000055
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.086294 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.083266 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.083279 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.146161 2 0.000023
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852263451s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.815780640s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.147805 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.042664 1 0.000198
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.085909 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.083072 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.083086 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852128983s) [0] async=[0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.815803528s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852112770s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815803528s@ mbc={}] exit Reset 0.000029 1 0.000042
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852112770s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815803528s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852112770s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815803528s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852112770s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815803528s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852112770s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815803528s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852112770s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815803528s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852163315s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815780640s@ mbc={}] exit Reset 0.000114 1 0.000138
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852163315s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815780640s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852163315s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815780640s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852163315s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815780640s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852163315s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815780640s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53 pruub=15.852163315s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815780640s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=52/53 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=52/53 n=2 ec=43/21 lis/c=52/43 les/c/f=53/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001747 3 0.000189
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=52/53 n=2 ec=43/21 lis/c=52/43 les/c/f=53/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=52/53 n=2 ec=43/21 lis/c=52/43 les/c/f=53/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000061 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=52/53 n=2 ec=43/21 lis/c=52/43 les/c/f=53/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.e( v 37'39 lc 33'17 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=52/53 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 53 handle_osd_map epochs [53,53], i have 53, src has [1,53]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=52/53 n=2 ec=43/21 lis/c=52/43 les/c/f=53/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.004057 4 0.000515
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/43 les/c/f=53/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003446 4 0.000655
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/43 les/c/f=53/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/43 les/c/f=53/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000040 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=52/53 n=2 ec=43/21 lis/c=52/43 les/c/f=53/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/43 les/c/f=53/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.e( v 37'39 lc 33'17 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/43 les/c/f=53/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.003731 3 0.000152
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.e( v 37'39 lc 33'17 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/43 les/c/f=53/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=52/53 n=2 ec=43/21 lis/c=52/43 les/c/f=53/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000555 2 0.000620
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+recovering+remapped mbc={255={(0+1)=1}}] exit Started/Primary/Active/Recovering 0.046440 4 0.000107
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=1}}] enter Started/Primary/Active/NotRecovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=1}}] exit Started/Primary/Active/NotRecovering 0.000252 1 0.000170
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=52/53 n=2 ec=43/21 lis/c=52/43 les/c/f=53/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=52/53 n=2 ec=43/21 lis/c=52/43 les/c/f=53/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000367 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=52/53 n=2 ec=43/21 lis/c=52/43 les/c/f=53/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:23.083807+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=52/53 n=2 ec=43/21 lis/c=52/43 les/c/f=53/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.075507 1 0.000541
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=52/53 n=2 ec=43/21 lis/c=52/43 les/c/f=53/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.e( v 37'39 lc 33'17 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/43 les/c/f=53/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.076033 3 0.000805
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.e( v 37'39 lc 33'17 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/43 les/c/f=53/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.e( v 37'39 lc 33'17 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/43 les/c/f=53/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000029 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=52/53 n=2 ec=43/21 lis/c=52/43 les/c/f=53/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000049 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=52/53 n=2 ec=43/21 lis/c=52/43 les/c/f=53/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.e( v 37'39 lc 33'17 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/43 les/c/f=53/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/43 les/c/f=53/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.059090 1 0.000199
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/43 les/c/f=53/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/43 les/c/f=53/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000049 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.287002 4 0.000013
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/43 les/c/f=53/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000418 1 0.000091
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.054756 2 0.000233
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.342291 4 0.000015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000216 1 0.000037
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.052606 2 0.000061
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.395180 4 0.000019
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000219 1 0.000035
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 68960256 unmapped: 1236992 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.059683 2 0.000036
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.455130 4 0.000018
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000283 1 0.000025
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.059677 2 0.000034
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.515119 4 0.000014
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000292 1 0.000155
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.053426 2 0.000055
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.568649 4 0.000185
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000251 1 0.000031
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.017278 2 0.000051
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.586047 4 0.000054
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000323 1 0.000039
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.038378 2 0.000053
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.624798 4 0.000102
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000369 1 0.000222
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.031188 2 0.000067
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.656556 4 0.000038
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000339 1 0.000051
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.031314 2 0.000051
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.687945 4 0.000027
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000341 1 0.000022
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.038374 2 0.000054
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.726691 4 0.000082
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000245 1 0.000028
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.052597 2 0.000054
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.779200 4 0.000264
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000279 1 0.000036
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.038472 2 0.000053
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.667352 1 0.000132
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000507 1 0.000029
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.009996 2 0.000052
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 53 handle_osd_map epochs [53,54], i have 53, src has [1,54]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.494060 1 0.000154
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.091781 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.089948 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active+clean] exit Started/Primary/Active/Clean 2.548410 7 0.000048
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active 3.089889 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary 4.009627 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started 4.009686 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.090098 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.915017128s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 37'39 active pruub 106.882560730s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.848460197s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.816101074s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.848379135s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816101074s@ mbc={}] exit Reset 0.000157 1 0.000371
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.848379135s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816101074s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.848379135s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816101074s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.848379135s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816101074s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.696642 1 0.000046
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.091634 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.089959 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 54 handle_osd_map epochs [54,54], i have 54, src has [1,54]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.914970398s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.882560730s@ mbc={}] exit Reset 0.000353 1 0.000089
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.914970398s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.882560730s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.914970398s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.882560730s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.848379135s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816101074s@ mbc={}] exit Start 0.000048 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.848379135s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816101074s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.090107 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.914970398s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.882560730s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847923279s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.815940857s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.914970398s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.882560730s@ mbc={}] exit Start 0.000127 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.914970398s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.882560730s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847882271s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815940857s@ mbc={}] exit Reset 0.000063 1 0.000223
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847882271s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815940857s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847882271s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815940857s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847882271s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815940857s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847882271s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815940857s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847882271s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815940857s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.636572 1 0.000119
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.091935 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.090743 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.090766 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active+clean] exit Started/Primary/Active/Clean 2.411684 7 0.000050
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active 3.090140 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary 4.010926 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started 4.010940 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.914413452s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 37'39 active pruub 106.882606506s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.322135 1 0.000034
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.091670 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.089102 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.089285 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847566605s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.815841675s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847519875s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815841675s@ mbc={}] exit Reset 0.000063 1 0.000085
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847519875s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815841675s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847519875s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815841675s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847519875s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815841675s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847519875s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815841675s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847519875s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815841675s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.914331436s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.882606506s@ mbc={}] exit Reset 0.000100 1 0.000124
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.914331436s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.882606506s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.914331436s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.882606506s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.914331436s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.882606506s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.914331436s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.882606506s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.914331436s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.882606506s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.810054 1 0.000183
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.092275 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.089773 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.089787 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847449303s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.815879822s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847414017s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815879822s@ mbc={}] exit Reset 0.000046 1 0.000063
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847414017s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815879822s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847414017s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815879822s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847414017s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815879822s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active+clean] exit Started/Primary/Active/Clean 2.197205 7 0.000105
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active 3.089649 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary 4.012021 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started 4.012146 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.918402672s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 37'39 active pruub 106.887001038s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.918376923s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.887001038s@ mbc={}] exit Reset 0.000039 1 0.000057
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.3] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847307205s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.815971375s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847414017s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815879822s@ mbc={}] exit Start 0.000232 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847414017s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815879822s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.3] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.757560 1 0.000092
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.091770 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.089624 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.089637 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847074509s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.815902710s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847105026s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815971375s@ mbc={}] exit Reset 0.000409 1 0.001088
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.918376923s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.887001038s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active+clean] exit Started/Primary/Active/Clean 2.316216 7 0.000031
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.425070 1 0.000086
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.092024 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.091702 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846937180s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815902710s@ mbc={}] exit Reset 0.000151 1 0.000172
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.091717 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846937180s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815902710s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846937180s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815902710s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846937180s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815902710s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846937180s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815902710s@ mbc={}] exit Start 0.000008 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846937180s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815902710s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847107887s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.816116333s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847078323s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816116333s@ mbc={}] exit Reset 0.000046 1 0.000137
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847078323s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816116333s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847078323s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816116333s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847078323s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816116333s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847078323s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816116333s@ mbc={}] exit Start 0.000009 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847078323s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816116333s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847105026s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815971375s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847105026s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815971375s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active 3.089639 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary 4.013633 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847105026s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815971375s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started 4.013702 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.918376923s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.887001038s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.918376923s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.887001038s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.918376923s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.887001038s@ mbc={}] exit Start 0.000284 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.918376923s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.887001038s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847105026s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815971375s@ mbc={}] exit Start 0.000114 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847105026s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815971375s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.372604 1 0.000125
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.584283 1 0.000078
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.092650 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.092190 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.092215 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.917899132s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 37'39 active pruub 106.887329102s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.917868614s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.887329102s@ mbc={}] exit Reset 0.000193 1 0.000560
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.917868614s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.887329102s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.917868614s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.887329102s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.917868614s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.887329102s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.917868614s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.887329102s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.917868614s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.887329102s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.527962 1 0.000037
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.092416 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.092467 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.092482 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846574783s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.816085815s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.333970 1 0.000057
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.092160 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.092513 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.092527 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847175598s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.816719055s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.464562 1 0.000056
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.092047 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.089177 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.089190 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.093027 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846515656s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.816139221s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.089738 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.089751 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846462250s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816085815s@ mbc={}] exit Reset 0.000131 1 0.000149
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846462250s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816085815s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846462250s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816085815s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.5] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846462250s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816085815s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847414970s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.817054749s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846462250s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816085815s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846462250s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816085815s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846486092s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816139221s@ mbc={}] exit Reset 0.000040 1 0.000052
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846486092s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816139221s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846486092s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816139221s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846486092s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816139221s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846486092s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816139221s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.5] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846486092s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816139221s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847385406s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.817054749s@ mbc={}] exit Reset 0.000041 1 0.000329
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847385406s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.817054749s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847385406s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.817054749s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847385406s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.817054749s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847385406s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.817054749s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.847385406s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.817054749s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.566972 1 0.000063
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.092144 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.092712 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.092726 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[45,51)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846344948s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.816062927s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846327782s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816062927s@ mbc={}] exit Reset 0.000029 1 0.000043
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846327782s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816062927s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846327782s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816062927s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846327782s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816062927s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846327782s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816062927s@ mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846327782s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816062927s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846558571s) [0] async=[0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.815994263s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846675873s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816719055s@ mbc={}] exit Reset 0.000512 1 0.000526
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846675873s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816719055s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846675873s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816719055s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846675873s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816719055s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846675873s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816719055s@ mbc={}] exit Start 0.000042 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.846675873s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.816719055s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.845700264s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815994263s@ mbc={}] exit Reset 0.000872 1 0.000900
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.845700264s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815994263s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.845700264s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815994263s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.845700264s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815994263s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.845700264s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815994263s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54 pruub=14.845700264s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.815994263s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.5] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.5] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.3] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.3] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 54 handle_osd_map epochs [54,54], i have 54, src has [1,54]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013987 7 0.000782
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013936 7 0.000179
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014459 7 0.000515
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000045 1 0.000034
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000115 1 0.000033
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000164 1 0.000113
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.038195 2 0.000234
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.038275 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.052291 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:24.083900+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.075322 2 0.000108
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.075468 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.089431 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.112214 2 0.000076
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.112418 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.126947 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.841435432s of 10.071040154s, submitted: 837
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 54 handle_osd_map epochs [55,55], i have 54, src has [1,55]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.016079 6 0.000467
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.015765 6 0.000110
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014670 6 0.000031
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014906 6 0.000637
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018300 7 0.000036
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019365 7 0.000461
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000041 1 0.000029
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000203 1 0.000127
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018995 7 0.000030
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018569 7 0.000201
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019789 7 0.000057
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000046 1 0.000023
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000074 1 0.000013
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019636 7 0.000428
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000096 1 0.000023
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000045 1 0.000043
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.3] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.020305 7 0.000270
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019363 7 0.000102
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.020693 7 0.000035
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019354 7 0.000071
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019412 7 0.000036
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000044 1 0.000020
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018733 7 0.000038
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.020960 7 0.000039
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000078 1 0.000011
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000103 1 0.000010
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000138 1 0.000018
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.5] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000155 1 0.000009
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000159 1 0.000062
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000205 1 0.000069
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 55 heartbeat osd_stat(store_statfs(0x4fcaf5000/0x0/0x4ffc00000, data 0xb6950/0x128000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [0,0,0,0,0,1])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:25.083988+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:39:54.425499+0000 osd.1 (osd.1) 50 : cluster [DBG] 2.6 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:39:54.439558+0000 osd.1 (osd.1) 51 : cluster [DBG] 2.6 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.7( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.191438 2 0.000110
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.7( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.191524 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.7( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.209932 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.15( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.220824 2 0.000178
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.15( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.221091 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.15( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.240767 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1b( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.234483 2 0.000088
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1b( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.234552 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1b( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.253573 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 704519 data_alloc: 218103808 data_used: 110592
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1d( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.271570 2 0.000079
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1d( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.271738 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1d( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.290457 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.323369 2 0.000140
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.323579 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.343417 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.3( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.382529 2 0.000081
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.3( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.382604 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.3( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.402557 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.3] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.9( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.419442 2 0.000107
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.9( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.419509 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.9( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.440071 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.b( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.449051 2 0.000062
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.b( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.449155 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.b( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.468538 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.f( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.500897 2 0.000091
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.f( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.501023 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.f( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.521735 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.5( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.552648 2 0.000077
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.5( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.552814 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.5( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.572187 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.5] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1f( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.589654 2 0.000119
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1f( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.589835 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.1f( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.609268 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.19( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.641488 2 0.000090
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.19( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.641694 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.19( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.660478 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.d( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.700718 2 0.000072
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.d( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.700952 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[9.d( v 44'389 (0'0,44'389] lb MIN local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=-1 lpr=54 pi=[45,54)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.721979 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.772552 3 0.001298
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.773787 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000065 1 0.000094
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.899366 3 0.000076
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.899401 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000052 1 0.000086
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.f( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.139312 2 0.000121
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.f( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.139430 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.f( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started 1.929515 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.981042 3 0.000597
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.981061 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000032 1 0.000042
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 51) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:39:54.425499+0000 osd.1 (osd.1) 50 : cluster [DBG] 2.6 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:39:54.439558+0000 osd.1 (osd.1) 51 : cluster [DBG] 2.6 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:26.084111+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.047398 3 0.000115
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 1.047425 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000050 1 0.000049
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.3( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.154454 2 0.000228
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.3( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.154548 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.3( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started 2.069804 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.b( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.094972 2 0.000125
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.b( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.095036 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.b( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started 2.090860 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.7( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.035956 2 0.000092
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.7( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.036037 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 55 pg[6.7( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=-1 lpr=54 pi=[50,54)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started 2.099071 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 155648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:27.084243+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 55 handle_osd_map epochs [56,56], i have 55, src has [1,56]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 56 handle_osd_map epochs [56,56], i have 56, src has [1,56]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.c(unlocked)] enter Initial
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=0 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000036 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=0 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000018
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000610 1 0.000039
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.4(unlocked)] enter Initial
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=0 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000023 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=0 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000010
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000052 1 0.000024
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.c( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.000564 2 0.000051
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.c( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.c( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.c( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.4( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/GetLog 0.000498 2 0.000032
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.4( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 mlcod 0'0 peering m=4 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.4( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 56 pg[6.4( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 mlcod 0'0 peering m=4 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 114688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 56 handle_osd_map epochs [56,57], i have 56, src has [1,57]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.c( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.853458 2 0.000038
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active+clean] exit Started/Primary/Active/Clean 6.782510 16 0.000085
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.c( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.854683 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active 7.118504 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary 8.038438 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.c( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started 8.038452 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.c( v 37'39 lc 33'16 (0'0,37'39] local-lis/les=56/57 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=8.884973526s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 37'39 active pruub 106.880828857s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=8.884924889s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.880828857s@ mbc={}] exit Reset 0.000072 1 0.000100
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=8.884924889s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.880828857s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=8.884924889s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.880828857s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=8.884924889s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.880828857s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=8.884924889s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.880828857s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=8.884924889s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.880828857s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active+clean] exit Started/Primary/Active/Clean 6.916795 16 0.000111
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active 7.118851 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary 8.039257 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started 8.039377 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=8.884659767s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 37'39 active pruub 106.880821228s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=8.884625435s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.880821228s@ mbc={}] exit Reset 0.000054 1 0.000075
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=8.884625435s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.880821228s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=8.884625435s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.880821228s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=8.884625435s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.880821228s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=8.884625435s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.880821228s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=8.884625435s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 106.880821228s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.4( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.854134 2 0.000042
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.4( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering 0.854720 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.4( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 mlcod 0'0 unknown m=4 mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.4( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=56/57 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.c( v 37'39 lc 33'16 (0'0,37'39] local-lis/les=56/57 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.c( v 37'39 lc 33'16 (0'0,37'39] local-lis/les=56/57 n=1 ec=43/21 lis/c=56/43 les/c/f=57/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.002890 3 0.000108
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.c( v 37'39 lc 33'16 (0'0,37'39] local-lis/les=56/57 n=1 ec=43/21 lis/c=56/43 les/c/f=57/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.c( v 37'39 lc 33'16 (0'0,37'39] local-lis/les=56/57 n=1 ec=43/21 lis/c=56/43 les/c/f=57/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.002005 1 0.000059
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.c( v 37'39 lc 33'16 (0'0,37'39] local-lis/les=56/57 n=1 ec=43/21 lis/c=56/43 les/c/f=57/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.c( v 37'39 lc 33'16 (0'0,37'39] local-lis/les=56/57 n=1 ec=43/21 lis/c=56/43 les/c/f=57/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.c( v 37'39 lc 33'16 (0'0,37'39] local-lis/les=56/57 n=1 ec=43/21 lis/c=56/43 les/c/f=57/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 57 handle_osd_map epochs [57,57], i have 57, src has [1,57]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 57 handle_osd_map epochs [57,57], i have 57, src has [1,57]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.4( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=56/57 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.4( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=56/57 n=2 ec=43/21 lis/c=56/43 les/c/f=57/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.004541 4 0.000076
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.4( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=56/57 n=2 ec=43/21 lis/c=56/43 les/c/f=57/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=56/57 n=1 ec=43/21 lis/c=56/43 les/c/f=57/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.008149 3 0.000033
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=56/57 n=1 ec=43/21 lis/c=56/43 les/c/f=57/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=56/57 n=1 ec=43/21 lis/c=56/43 les/c/f=57/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=56/57 n=1 ec=43/21 lis/c=56/43 les/c/f=57/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.4( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=56/57 n=2 ec=43/21 lis/c=56/43 les/c/f=57/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.007852 2 0.000015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.4( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=56/57 n=2 ec=43/21 lis/c=56/43 les/c/f=57/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.4( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=56/57 n=2 ec=43/21 lis/c=56/43 les/c/f=57/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.4( v 37'39 lc 33'15 (0'0,37'39] local-lis/les=56/57 n=2 ec=43/21 lis/c=56/43 les/c/f=57/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:28.084340+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 114688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=56/57 n=2 ec=43/21 lis/c=56/43 les/c/f=57/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.251306 1 0.000039
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=56/57 n=2 ec=43/21 lis/c=56/43 les/c/f=57/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=56/57 n=2 ec=43/21 lis/c=56/43 les/c/f=57/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 57 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=56/57 n=2 ec=43/21 lis/c=56/43 les/c/f=57/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 57 handle_osd_map epochs [58,58], i have 57, src has [1,58]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 58 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.005968 6 0.000086
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 58 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.006088 6 0.000248
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 58 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 58 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 58 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 58 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:29.084424+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 58 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.066391 3 0.000037
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 58 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.066420 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 58 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 58 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 58 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000063 1 0.000079
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 58 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 58 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.192515 3 0.000056
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 58 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.192541 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 58 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 58 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 58 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000042 1 0.000060
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 58 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 58 pg[6.5( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.132264 2 0.000124
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 58 pg[6.5( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.132383 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 58 pg[6.5( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started 1.204835 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 58 pg[6.d( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.028369 2 0.000127
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 58 pg[6.d( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.028436 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 58 pg[6.d( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=-1 lpr=57 pi=[50,57)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started 1.227263 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 98304 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:30.084516+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 57344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 629368 data_alloc: 218103808 data_used: 110592
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 58 heartbeat osd_stat(store_statfs(0x4fcafb000/0x0/0x4ffc00000, data 0xbd304/0x120000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 58 handle_osd_map epochs [59,59], i have 58, src has [1,59]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 15.392006 38 0.000077
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 15.398884 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 15.398971 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 15.398994 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.607535362s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.925239563s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.607491493s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.925239563s@ mbc={}] exit Reset 0.000075 1 0.000119
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.607491493s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.925239563s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.607491493s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.925239563s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.607491493s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.925239563s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.607491493s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.925239563s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.607491493s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.925239563s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 15.392532 38 0.000055
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 15.386399 38 0.000063
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 15.392986 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 15.394799 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 15.395009 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 15.396965 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 15.392390 38 0.000069
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 15.395617 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 15.395653 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 15.395672 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.607428551s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.926414490s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.607377052s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.926414490s@ mbc={}] exit Reset 0.000077 1 0.000130
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.607377052s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.926414490s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.607377052s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.926414490s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.607377052s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.926414490s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.607377052s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.926414490s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.607377052s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.926414490s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 15.397523 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 15.397571 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.613403320s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.932228088s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.612406731s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.932228088s@ mbc={}] exit Reset 0.001031 1 0.001125
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.605746269s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 108.925514221s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.605547905s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.925514221s@ mbc={}] exit Reset 0.000550 1 0.001351
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.612406731s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.932228088s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.612406731s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.932228088s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.605547905s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.925514221s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.612406731s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.932228088s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.612406731s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.932228088s@ mbc={}] exit Start 0.000110 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.605547905s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.925514221s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.605547905s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.925514221s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.605547905s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.925514221s@ mbc={}] exit Start 0.000181 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.612406731s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.932228088s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 59 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=8.605547905s) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.925514221s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 59 handle_osd_map epochs [59,60], i have 59, src has [1,60]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.557191 3 0.000033
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.557225 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000752 1 0.000779
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.555647 3 0.000467
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.555965 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000038 1 0.000057
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.555929 3 0.000449
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.556137 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000027 1 0.000037
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.557238 3 0.000035
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.557263 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=59) [2] r=-1 lpr=59 pi=[45,59)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000033 1 0.000045
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003729 2 0.000042
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 60 handle_osd_map epochs [60,60], i have 60, src has [1,60]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 60 handle_osd_map epochs [60,60], i have 60, src has [1,60]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003160 2 0.000024
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000068 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000011 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000058 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004739 2 0.000026
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004669 2 0.000025
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000027 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000043 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 60 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:31.084609+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 1032192 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 60 handle_osd_map epochs [60,61], i have 60, src has [1,61]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998603 3 0.000148
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.002484 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997656 3 0.000050
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.002446 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997640 3 0.000190
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.002466 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999047 3 0.000143
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.002350 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 61 handle_osd_map epochs [60,61], i have 61, src has [1,61]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:32.084700+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:01.442410+0000 osd.1 (osd.1) 52 : cluster [DBG] 5.1 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:01.456547+0000 osd.1 (osd.1) 53 : cluster [DBG] 5.1 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1024000 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.563009 5 0.000164
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000078 1 0.000074
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000523 1 0.000023
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.564597 5 0.000121
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.565007 5 0.000122
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.564635 5 0.000213
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.042440 2 0.000043
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.041640 1 0.000033
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000346 1 0.000029
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.038427 2 0.000045
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.080280 1 0.000013
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000419 1 0.000040
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.031278 2 0.000040
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.111963 1 0.000041
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000355 1 0.000038
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.038379 2 0.000031
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 61 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 53) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:01.442410+0000 osd.1 (osd.1) 52 : cluster [DBG] 5.1 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:01.456547+0000 osd.1 (osd.1) 53 : cluster [DBG] 5.1 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:33.084786+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 61 handle_osd_map epochs [62,62], i have 61, src has [1,62]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 61 handle_osd_map epochs [62,62], i have 62, src has [1,62]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 61 handle_osd_map epochs [62,62], i have 62, src has [1,62]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.432117 1 0.000061
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.147601 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.541506 1 0.000078
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.149963 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.147717 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.150195 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.150217 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.470931 1 0.000201
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.148114 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.150630 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.150648 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.415146828s) [2] async=[2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 118.441520691s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.415066719s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.441520691s@ mbc={}] exit Reset 0.000101 1 0.000180
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.415066719s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.441520691s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.415066719s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.441520691s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.415066719s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.441520691s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.415066719s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.441520691s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.415066719s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.441520691s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.502686 1 0.000191
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.148129 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.150589 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.150604 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.150196 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=60) [2]/[1] async=[2] r=0 lpr=60 pi=[45,60)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.416400909s) [2] async=[2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 118.443046570s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.416225433s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.443046570s@ mbc={}] exit Reset 0.000192 1 0.000526
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.416225433s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.443046570s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.416225433s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.443046570s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.416225433s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.443046570s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.416225433s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.443046570s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.416225433s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.443046570s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.416756630s) [2] async=[2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 118.443283081s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.416030884s) [2] async=[2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 118.443237305s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.415893555s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.443283081s@ mbc={}] exit Reset 0.000988 1 0.001092
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.415829659s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.443237305s@ mbc={}] exit Reset 0.000691 1 0.001141
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.415893555s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.443283081s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.415829659s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.443237305s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.415893555s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.443283081s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.415829659s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.443237305s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.415893555s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.443283081s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.415893555s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.443283081s@ mbc={}] exit Start 0.000118 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.415829659s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.443237305s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.415829659s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.443237305s@ mbc={}] exit Start 0.000157 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-mon[74966]: pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:58:09 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/383441113' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 26 12:58:09 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/598505242' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 26 12:58:09 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/922276148' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.415893555s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.443283081s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/3015517953' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2884655991' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 62 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62 pruub=15.415829659s) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.443237305s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/640217898' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fcaf2000/0x0/0x4ffc00000, data 0xc289b/0x129000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 1097728 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fcaf1000/0x0/0x4ffc00000, data 0xc4322/0x12c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:34.084938+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 1097728 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _renew_subs
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 62 handle_osd_map epochs [63,63], i have 62, src has [1,63]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.907162666s of 10.024430275s, submitted: 122
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.361202 6 0.000969
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.362973 6 0.000217
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.361825 6 0.000944
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.362617 6 0.000046
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000632 1 0.000039
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000786 2 0.000059
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000782 2 0.000162
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000811 2 0.000286
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] lb MIN local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 DELETING pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.060575 3 0.000177
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] lb MIN local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.061270 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.1e( v 44'389 (0'0,44'389] lb MIN local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.423106 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] lb MIN local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 DELETING pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.089846 2 0.000083
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] lb MIN local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.090663 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.16( v 44'389 (0'0,44'389] lb MIN local-lis/les=60/61 n=5 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.452771 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] lb MIN local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 DELETING pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.126769 2 0.000136
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] lb MIN local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.127583 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.e( v 44'389 (0'0,44'389] lb MIN local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.490365 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] lb MIN local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 DELETING pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.171215 2 0.000074
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] lb MIN local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.172206 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 63 pg[9.6( v 44'389 (0'0,44'389] lb MIN local-lis/les=60/61 n=6 ec=45/34 lis/c=60/45 les/c/f=61/46/0 sis=62) [2] r=-1 lpr=62 pi=[45,62)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.535233 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:35.085045+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 1081344 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 608303 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:36.085183+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 1081344 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 63 handle_osd_map epochs [64,65], i have 63, src has [1,65]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:37.085320+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 942080 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:38.085455+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:07.463677+0000 osd.1 (osd.1) 54 : cluster [DBG] 2.9 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:07.477731+0000 osd.1 (osd.1) 55 : cluster [DBG] 2.9 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 55) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:07.463677+0000 osd.1 (osd.1) 54 : cluster [DBG] 2.9 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:07.477731+0000 osd.1 (osd.1) 55 : cluster [DBG] 2.9 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:39.085620+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 66 heartbeat osd_stat(store_statfs(0x4fcae6000/0x0/0x4ffc00000, data 0xcac92/0x136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:40.085727+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 66 handle_osd_map epochs [66,67], i have 66, src has [1,67]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 25.128766 62 0.000109
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 25.132535 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 25.132621 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 25.132639 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67 pruub=14.871047974s) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 124.926139832s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67 pruub=14.870986938s) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 124.926139832s@ mbc={}] exit Reset 0.000102 1 0.000152
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67 pruub=14.870986938s) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 124.926139832s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67 pruub=14.870986938s) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 124.926139832s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67 pruub=14.870986938s) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 124.926139832s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67 pruub=14.870986938s) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 124.926139832s@ mbc={}] exit Start 0.000015 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67 pruub=14.870986938s) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 124.926139832s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 25.126984 62 0.000101
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 25.130258 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 25.130862 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 25.130881 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67 pruub=14.873212814s) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 124.928573608s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67 pruub=14.873172760s) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 124.928573608s@ mbc={}] exit Reset 0.000065 1 0.000098
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67 pruub=14.873172760s) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 124.928573608s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67 pruub=14.873172760s) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 124.928573608s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67 pruub=14.873172760s) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 124.928573608s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67 pruub=14.873172760s) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 124.928573608s@ mbc={}] exit Start 0.000009 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 67 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67 pruub=14.873172760s) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 124.928573608s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 67 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 621466 data_alloc: 218103808 data_used: 135168
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 67 handle_osd_map epochs [67,68], i have 67, src has [1,68]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.822968 3 0.000064
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.823023 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000062 1 0.000090
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.823163 3 0.000045
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.823191 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000031 1 0.000047
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 68 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002141 2 0.000023
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000024 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003375 2 0.000033
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 68 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:41.085855+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 68 handle_osd_map epochs [68,69], i have 68, src has [1,69]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001834 3 0.000068
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.005300 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 20.995835 52 0.000265
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 21.004368 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 21.927377 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 21.927512 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=50) [1] r=0 lpr=50 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69 pruub=11.003540039s) [0] r=-1 lpr=69 pi=[50,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 122.887435913s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69 pruub=11.003467560s) [0] r=-1 lpr=69 pi=[50,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 122.887435913s@ mbc={}] exit Reset 0.000346 1 0.000391
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69 pruub=11.003467560s) [0] r=-1 lpr=69 pi=[50,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 122.887435913s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69 pruub=11.003467560s) [0] r=-1 lpr=69 pi=[50,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 122.887435913s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69 pruub=11.003467560s) [0] r=-1 lpr=69 pi=[50,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 122.887435913s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69 pruub=11.003467560s) [0] r=-1 lpr=69 pi=[50,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 122.887435913s@ mbc={}] exit Start 0.000009 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69 pruub=11.003467560s) [0] r=-1 lpr=69 pi=[50,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 122.887435913s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003520 3 0.000111
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.005775 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 69 handle_osd_map epochs [69,69], i have 69, src has [1,69]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:42.085952+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 69 handle_osd_map epochs [69,69], i have 69, src has [1,69]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 69 handle_osd_map epochs [69,69], i have 69, src has [1,69]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.673246 5 0.000145
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000080 1 0.000080
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000326 1 0.000031
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.675174 5 0.000300
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035155 2 0.000047
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.034508 1 0.000038
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000295 1 0.000035
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.052545 2 0.000029
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 69 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=-1 lpr=69 pi=[50,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.002526 6 0.000374
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=-1 lpr=69 pi=[50,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=-1 lpr=69 pi=[50,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.241058 1 0.000046
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.003739 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.009053 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.009076 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70 pruub=15.671307564s) [2] async=[2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 128.558670044s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=-1 lpr=69 pi=[50,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000734 2 0.000041
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=-1 lpr=69 pi=[50,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.294269 1 0.000055
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.003190 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.008981 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.008996 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[45,68)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70 pruub=15.669959068s) [2] async=[2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 128.557617188s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70 pruub=15.669912338s) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.557617188s@ mbc={}] exit Reset 0.000069 1 0.000102
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70 pruub=15.669912338s) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.557617188s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70 pruub=15.669912338s) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.557617188s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70 pruub=15.669912338s) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.557617188s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70 pruub=15.669912338s) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.557617188s@ mbc={}] exit Start 0.000007 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70 pruub=15.669912338s) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.557617188s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70 pruub=15.670838356s) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.558670044s@ mbc={}] exit Reset 0.000489 1 0.000525
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70 pruub=15.670838356s) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.558670044s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70 pruub=15.670838356s) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.558670044s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70 pruub=15.670838356s) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.558670044s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70 pruub=15.670838356s) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.558670044s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70 pruub=15.670838356s) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.558670044s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 70 handle_osd_map epochs [70,70], i have 70, src has [1,70]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[6.9( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=-1 lpr=69 DELETING pi=[50,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.003456 1 0.000107
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[6.9( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=-1 lpr=69 pi=[50,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.004231 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 70 pg[6.9( v 37'39 (0'0,37'39] lb MIN local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=-1 lpr=69 pi=[50,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.007057 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:43.086053+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:12.498334+0000 osd.1 (osd.1) 56 : cluster [DBG] 5.f scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:12.512447+0000 osd.1 (osd.1) 57 : cluster [DBG] 5.f scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 57) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:12.498334+0000 osd.1 (osd.1) 56 : cluster [DBG] 5.f scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:12.512447+0000 osd.1 (osd.1) 57 : cluster [DBG] 5.f scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 70 heartbeat osd_stat(store_statfs(0x4fcade000/0x0/0x4ffc00000, data 0xd0193/0x13f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:44.086263+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _renew_subs
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.029497147s of 10.084000587s, submitted: 45
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 71 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.585549 6 0.000159
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 71 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 71 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 71 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.585442 6 0.000079
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 71 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 71 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 71 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000515 1 0.000051
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 71 pg[9.18( v 44'389 (0'0,44'389] local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 71 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000887 2 0.000041
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 71 pg[9.8( v 44'389 (0'0,44'389] local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 71 pg[9.18( v 44'389 (0'0,44'389] lb MIN local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=-1 lpr=70 DELETING pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.039038 3 0.000153
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 71 pg[9.18( v 44'389 (0'0,44'389] lb MIN local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.039582 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 71 pg[9.18( v 44'389 (0'0,44'389] lb MIN local-lis/les=68/69 n=5 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.625174 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 71 pg[9.8( v 44'389 (0'0,44'389] lb MIN local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=-1 lpr=70 DELETING pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.090738 2 0.000136
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 71 pg[9.8( v 44'389 (0'0,44'389] lb MIN local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.091660 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 71 pg[9.8( v 44'389 (0'0,44'389] lb MIN local-lis/les=68/69 n=6 ec=45/34 lis/c=68/45 les/c/f=69/46/0 sis=70) [2] r=-1 lpr=70 pi=[45,70)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.677135 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:45.086367+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 1916928 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 613177 data_alloc: 218103808 data_used: 126976
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 71 heartbeat osd_stat(store_statfs(0x4fcada000/0x0/0x4ffc00000, data 0xd1bf8/0x142000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 71 heartbeat osd_stat(store_statfs(0x4fcada000/0x0/0x4ffc00000, data 0xd354f/0x143000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:46.086808+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 1908736 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:47.086905+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:16.550086+0000 osd.1 (osd.1) 58 : cluster [DBG] 5.c scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:16.564170+0000 osd.1 (osd.1) 59 : cluster [DBG] 5.c scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 1908736 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 59) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:16.550086+0000 osd.1 (osd.1) 58 : cluster [DBG] 5.c scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:16.564170+0000 osd.1 (osd.1) 59 : cluster [DBG] 5.c scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:48.087018+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 1908736 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:49.087112+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 1900544 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:50.087214+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 1900544 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 614324 data_alloc: 218103808 data_used: 126976
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:51.087319+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 71 heartbeat osd_stat(store_statfs(0x4fcada000/0x0/0x4ffc00000, data 0xd354f/0x143000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 72 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=52) [1] r=0 lpr=52 crt=37'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 28.252022 56 0.000689
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 72 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=52) [1] r=0 lpr=52 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 28.256064 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 72 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=52) [1] r=0 lpr=52 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 28.403987 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 72 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=52) [1] r=0 lpr=52 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 28.404005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 72 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=52) [1] r=0 lpr=52 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 72 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72 pruub=11.746877670s) [0] r=-1 lpr=72 pi=[52,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 132.966613770s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 72 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72 pruub=11.746830940s) [0] r=-1 lpr=72 pi=[52,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.966613770s@ mbc={}] exit Reset 0.000073 1 0.000109
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 72 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72 pruub=11.746830940s) [0] r=-1 lpr=72 pi=[52,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.966613770s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 72 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72 pruub=11.746830940s) [0] r=-1 lpr=72 pi=[52,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.966613770s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 72 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72 pruub=11.746830940s) [0] r=-1 lpr=72 pi=[52,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.966613770s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 72 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72 pruub=11.746830940s) [0] r=-1 lpr=72 pi=[52,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.966613770s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 72 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72 pruub=11.746830940s) [0] r=-1 lpr=72 pi=[52,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.966613770s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 1884160 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:52.087412+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 1884160 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _renew_subs
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 73 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=-1 lpr=72 pi=[52,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.235894 6 0.000065
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 73 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=-1 lpr=72 pi=[52,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 73 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=-1 lpr=72 pi=[52,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 73 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=-1 lpr=72 pi=[52,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000482 2 0.000078
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 73 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=-1 lpr=72 pi=[52,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 73 pg[6.a( v 37'39 (0'0,37'39] lb MIN local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=-1 lpr=72 DELETING pi=[52,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.012091 1 0.000118
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 73 pg[6.a( v 37'39 (0'0,37'39] lb MIN local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=-1 lpr=72 pi=[52,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.012640 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 73 pg[6.a( v 37'39 (0'0,37'39] lb MIN local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=-1 lpr=72 pi=[52,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.248594 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:53.087510+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:22.519652+0000 osd.1 (osd.1) 60 : cluster [DBG] 5.1a scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:22.541086+0000 osd.1 (osd.1) 61 : cluster [DBG] 5.1a scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 61) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:22.519652+0000 osd.1 (osd.1) 60 : cluster [DBG] 5.1a scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:22.541086+0000 osd.1 (osd.1) 61 : cluster [DBG] 5.1a scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 1875968 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:54.087782+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 1875968 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:55.087874+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:24.497898+0000 osd.1 (osd.1) 62 : cluster [DBG] 5.18 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:24.512032+0000 osd.1 (osd.1) 63 : cluster [DBG] 5.18 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 63) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:24.497898+0000 osd.1 (osd.1) 62 : cluster [DBG] 5.18 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:24.512032+0000 osd.1 (osd.1) 63 : cluster [DBG] 5.18 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 1867776 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 620330 data_alloc: 218103808 data_used: 126976
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:56.087992+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 1867776 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.882926941s of 11.908220291s, submitted: 43
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:57.088094+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:26.442186+0000 osd.1 (osd.1) 64 : cluster [DBG] 5.1d scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:26.456220+0000 osd.1 (osd.1) 65 : cluster [DBG] 5.1d scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 65) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:26.442186+0000 osd.1 (osd.1) 64 : cluster [DBG] 5.1d scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:26.456220+0000 osd.1 (osd.1) 65 : cluster [DBG] 5.1d scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcad5000/0x0/0x4ffc00000, data 0xd6c9a/0x149000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 1851392 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:58.088422+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcad5000/0x0/0x4ffc00000, data 0xd6c9a/0x149000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71532544 unmapped: 1810432 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:59.088567+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:28.435289+0000 osd.1 (osd.1) 66 : cluster [DBG] 2.7 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:28.449194+0000 osd.1 (osd.1) 67 : cluster [DBG] 2.7 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 67) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:28.435289+0000 osd.1 (osd.1) 66 : cluster [DBG] 2.7 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:28.449194+0000 osd.1 (osd.1) 67 : cluster [DBG] 2.7 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 1802240 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 74 pg[6.b(unlocked)] enter Initial
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=0 lpr=0 pi=[54,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000044 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=0 lpr=0 pi=[54,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000012 1 0.000027
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000124 1 0.000040
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 74 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.000753 2 0.000249
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 74 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 74 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 74 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:00.088713+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 74 handle_osd_map epochs [74,75], i have 74, src has [1,75]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 74 handle_osd_map epochs [74,75], i have 75, src has [1,75]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 45.157257 88 0.000131
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 45.162503 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 45.162551 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 45.162575 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75 pruub=10.842657089s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 140.926010132s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75 pruub=10.842606544s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.926010132s@ mbc={}] exit Reset 0.000085 1 0.000132
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75 pruub=10.842606544s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.926010132s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75 pruub=10.842606544s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.926010132s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75 pruub=10.842606544s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.926010132s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75 pruub=10.842606544s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.926010132s@ mbc={}] exit Start 0.000007 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75 pruub=10.842606544s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.926010132s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.380002 2 0.000049
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.380924 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=74/75 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 45.157196 88 0.000127
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 45.160102 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 45.160207 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 45.160222 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=44'389 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75 pruub=10.843101501s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 140.926895142s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75 pruub=10.843066216s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.926895142s@ mbc={}] exit Reset 0.000051 1 0.000073
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75 pruub=10.843066216s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.926895142s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75 pruub=10.843066216s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.926895142s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75 pruub=10.843066216s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.926895142s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75 pruub=10.843066216s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.926895142s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75 pruub=10.843066216s) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.926895142s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=74/75 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=74/75 n=1 ec=43/21 lis/c=74/54 les/c/f=75/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.002610 3 0.000086
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=74/75 n=1 ec=43/21 lis/c=74/54 les/c/f=75/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=74/75 n=1 ec=43/21 lis/c=74/54 les/c/f=75/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000111 1 0.000099
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=74/75 n=1 ec=43/21 lis/c=74/54 les/c/f=75/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=74/75 n=1 ec=43/21 lis/c=74/54 les/c/f=75/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=74/75 n=1 ec=43/21 lis/c=74/54 les/c/f=75/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 75 handle_osd_map epochs [75,75], i have 75, src has [1,75]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 75 handle_osd_map epochs [75,75], i have 75, src has [1,75]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=74/75 n=1 ec=43/21 lis/c=74/54 les/c/f=75/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.009411 3 0.000037
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=74/75 n=1 ec=43/21 lis/c=74/54 les/c/f=75/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=74/75 n=1 ec=43/21 lis/c=74/54 les/c/f=75/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 75 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=74/75 n=1 ec=43/21 lis/c=74/54 les/c/f=75/55/0 sis=74) [1] r=0 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 1802240 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 632901 data_alloc: 218103808 data_used: 143360
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.802236 3 0.000042
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.802295 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.801883 3 0.000025
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.801947 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=75) [2] r=-1 lpr=75 pi=[45,75)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000054 1 0.000118
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000022 1 0.000027
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000019 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000597 1 0.000666
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000081 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000031 1 0.000254
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000025 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 76 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:01.088822+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 1826816 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 76 handle_osd_map epochs [76,77], i have 76, src has [1,77]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 76 handle_osd_map epochs [76,77], i have 77, src has [1,77]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997375 4 0.000102
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.997520 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998678 4 0.000062
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.998777 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=45/46 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:02.088918+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.159259 5 0.000207
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000102 1 0.000027
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000327 1 0.000019
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.161027 5 0.000210
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.049490 2 0.000026
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.048749 1 0.000049
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000200 1 0.000026
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.038527 2 0.000055
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71532544 unmapped: 1810432 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 77 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xdc169/0x152000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[6.d(unlocked)] enter Initial
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=0 lpr=0 pi=[57,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000032 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=0 lpr=0 pi=[57,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000073 1 0.000034
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.000540 2 0.000037
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 77 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:03.089017+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 77 handle_osd_map epochs [78,78], i have 78, src has [1,78]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.965772 1 0.000045
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.214453 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.211997 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.212195 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 1.004562 1 0.000247
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78 pruub=14.946485519s) [2] async=[2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 148.045059204s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78 pruub=14.946434021s) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.045059204s@ mbc={}] exit Reset 0.000080 1 0.000114
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78 pruub=14.946434021s) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.045059204s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78 pruub=14.946434021s) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.045059204s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78 pruub=14.946434021s) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.045059204s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78 pruub=14.946434021s) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.045059204s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78 pruub=14.946434021s) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.045059204s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.777578 2 0.000038
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 0.778218 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=37'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.213914 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.213035 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.213079 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[45,76)/1 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78 pruub=14.944923401s) [2] async=[2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 active pruub 148.043884277s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78 pruub=14.944789886s) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.043884277s@ mbc={}] exit Reset 0.000163 1 0.000588
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78 pruub=14.944789886s) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.043884277s@ mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78 pruub=14.944789886s) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.043884277s@ mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78 pruub=14.944789886s) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.043884277s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78 pruub=14.944789886s) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.043884277s@ mbc={}] exit Start 0.000080 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78 pruub=14.944789886s) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.043884277s@ mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=77/78 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=77/78 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 78 handle_osd_map epochs [78,78], i have 78, src has [1,78]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=77/78 n=1 ec=43/21 lis/c=77/57 les/c/f=78/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.005993 3 0.000912
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=77/78 n=1 ec=43/21 lis/c=77/57 les/c/f=78/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=77/78 n=1 ec=43/21 lis/c=77/57 les/c/f=78/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000121 2 0.000521
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=77/78 n=1 ec=43/21 lis/c=77/57 les/c/f=78/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=77/78 n=1 ec=43/21 lis/c=77/57 les/c/f=78/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=77/78 n=1 ec=43/21 lis/c=77/57 les/c/f=78/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=77/78 n=1 ec=43/21 lis/c=77/57 les/c/f=78/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.066896 2 0.000038
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=77/78 n=1 ec=43/21 lis/c=77/57 les/c/f=78/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=77/78 n=1 ec=43/21 lis/c=77/57 les/c/f=78/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 78 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=77/78 n=1 ec=43/21 lis/c=77/57 les/c/f=78/58/0 sis=77) [1] r=0 lpr=77 pi=[57,77)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 1761280 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 78 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xdf95e/0x159000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:04.089141+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:33.453143+0000 osd.1 (osd.1) 68 : cluster [DBG] 5.19 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:33.467288+0000 osd.1 (osd.1) 69 : cluster [DBG] 5.19 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 69) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:33.453143+0000 osd.1 (osd.1) 68 : cluster [DBG] 5.19 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:33.467288+0000 osd.1 (osd.1) 69 : cluster [DBG] 5.19 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 79 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.102656 6 0.000197
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 79 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.103285 6 0.000070
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 79 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 79 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 79 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 79 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 79 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000588 2 0.000108
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 79 pg[9.c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 79 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000658 2 0.000148
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 79 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 79 pg[9.c( v 44'389 (0'0,44'389] lb MIN local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=-1 lpr=78 DELETING pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.045690 2 0.000126
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 79 pg[9.c( v 44'389 (0'0,44'389] lb MIN local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.046309 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 79 pg[9.c( v 44'389 (0'0,44'389] lb MIN local-lis/les=76/77 n=6 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.149624 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 79 pg[9.1c( v 44'389 (0'0,44'389] lb MIN local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=-1 lpr=78 DELETING pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.097549 2 0.000115
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 79 pg[9.1c( v 44'389 (0'0,44'389] lb MIN local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.098249 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 79 pg[9.1c( v 44'389 (0'0,44'389] lb MIN local-lis/les=76/77 n=5 ec=45/34 lis/c=76/45 les/c/f=77/46/0 sis=78) [2] r=-1 lpr=78 pi=[45,78)/1 crt=44'389 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.201057 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 1703936 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.8 deep-scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.8 deep-scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:05.089277+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:34.493619+0000 osd.1 (osd.1) 70 : cluster [DBG] 4.8 deep-scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:34.507750+0000 osd.1 (osd.1) 71 : cluster [DBG] 4.8 deep-scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 71) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:34.493619+0000 osd.1 (osd.1) 70 : cluster [DBG] 4.8 deep-scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:34.507750+0000 osd.1 (osd.1) 71 : cluster [DBG] 4.8 deep-scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 1703936 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 633795 data_alloc: 218103808 data_used: 147456
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 79 heartbeat osd_stat(store_statfs(0x4fcac1000/0x0/0x4ffc00000, data 0xe1323/0x15b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:06.089431+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 1703936 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:07.089563+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 1695744 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.029020309s of 11.088321686s, submitted: 46
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:08.089700+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:37.530486+0000 osd.1 (osd.1) 72 : cluster [DBG] 4.5 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:37.544612+0000 osd.1 (osd.1) 73 : cluster [DBG] 4.5 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 73) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:37.530486+0000 osd.1 (osd.1) 72 : cluster [DBG] 4.5 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:37.544612+0000 osd.1 (osd.1) 73 : cluster [DBG] 4.5 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 1695744 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.7 deep-scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.7 deep-scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:09.089853+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:38.529175+0000 osd.1 (osd.1) 74 : cluster [DBG] 4.7 deep-scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:38.543311+0000 osd.1 (osd.1) 75 : cluster [DBG] 4.7 deep-scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 75) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:38.529175+0000 osd.1 (osd.1) 74 : cluster [DBG] 4.7 deep-scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:38.543311+0000 osd.1 (osd.1) 75 : cluster [DBG] 4.7 deep-scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 79 heartbeat osd_stat(store_statfs(0x4fcac3000/0x0/0x4ffc00000, data 0xe1323/0x15b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 1687552 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 79 heartbeat osd_stat(store_statfs(0x4fcac3000/0x0/0x4ffc00000, data 0xe1323/0x15b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:10.090052+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 1671168 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 634361 data_alloc: 218103808 data_used: 147456
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:11.090184+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 1671168 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:12.090319+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:41.528605+0000 osd.1 (osd.1) 76 : cluster [DBG] 4.2 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:41.542697+0000 osd.1 (osd.1) 77 : cluster [DBG] 4.2 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 77) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:41.528605+0000 osd.1 (osd.1) 76 : cluster [DBG] 4.2 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:41.542697+0000 osd.1 (osd.1) 77 : cluster [DBG] 4.2 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 1662976 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:13.090478+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 1662976 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 79 heartbeat osd_stat(store_statfs(0x4fcac3000/0x0/0x4ffc00000, data 0xe1323/0x15b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 79 handle_osd_map epochs [80,82], i have 79, src has [1,82]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 79 handle_osd_map epochs [80,82], i have 82, src has [1,82]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:14.090610+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 82 heartbeat osd_stat(store_statfs(0x4fcac3000/0x0/0x4ffc00000, data 0xe1323/0x15b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 1556480 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:15.090723+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 1556480 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 650324 data_alloc: 218103808 data_used: 155648
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:16.090867+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 1556480 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:17.090992+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 1548288 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.989352226s of 10.001952171s, submitted: 19
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 84 heartbeat osd_stat(store_statfs(0x4fcab5000/0x0/0x4ffc00000, data 0xe8303/0x167000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:18.091120+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:47.532555+0000 osd.1 (osd.1) 78 : cluster [DBG] 4.4 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:47.546696+0000 osd.1 (osd.1) 79 : cluster [DBG] 4.4 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 79) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:47.532555+0000 osd.1 (osd.1) 78 : cluster [DBG] 4.4 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:47.546696+0000 osd.1 (osd.1) 79 : cluster [DBG] 4.4 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71835648 unmapped: 1507328 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:19.091258+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71843840 unmapped: 1499136 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:20.091408+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 1490944 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 656839 data_alloc: 218103808 data_used: 167936
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 85 heartbeat osd_stat(store_statfs(0x4fcaaf000/0x0/0x4ffc00000, data 0xeb9fd/0x16d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:21.091544+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 1531904 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:22.091665+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:51.545864+0000 osd.1 (osd.1) 80 : cluster [DBG] 4.9 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:51.559988+0000 osd.1 (osd.1) 81 : cluster [DBG] 4.9 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 81) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:51.545864+0000 osd.1 (osd.1) 80 : cluster [DBG] 4.9 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:51.559988+0000 osd.1 (osd.1) 81 : cluster [DBG] 4.9 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 1523712 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 86 heartbeat osd_stat(store_statfs(0x4fcaad000/0x0/0x4ffc00000, data 0xed57a/0x170000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 86 handle_osd_map epochs [87,88], i have 86, src has [1,88]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 86 handle_osd_map epochs [87,88], i have 88, src has [1,88]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:23.091785+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:52.551800+0000 osd.1 (osd.1) 82 : cluster [DBG] 4.f scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:52.565918+0000 osd.1 (osd.1) 83 : cluster [DBG] 4.f scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 88 handle_osd_map epochs [88,89], i have 88, src has [1,89]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 83) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:52.551800+0000 osd.1 (osd.1) 82 : cluster [DBG] 4.f scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:52.565918+0000 osd.1 (osd.1) 83 : cluster [DBG] 4.f scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 1474560 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:24.091961+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 1417216 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 89 heartbeat osd_stat(store_statfs(0x4fcaa2000/0x0/0x4ffc00000, data 0xf2473/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 89 handle_osd_map epochs [90,90], i have 90, src has [1,90]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:25.092110+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 90 heartbeat osd_stat(store_statfs(0x4fcaa2000/0x0/0x4ffc00000, data 0xf2473/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 1712128 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 673321 data_alloc: 218103808 data_used: 167936
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:26.092243+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 1703936 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:27.092374+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 1703936 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.080651283s of 10.109132767s, submitted: 55
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:28.092529+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:57.641572+0000 osd.1 (osd.1) 84 : cluster [DBG] 4.d scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:57.655685+0000 osd.1 (osd.1) 85 : cluster [DBG] 4.d scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 85) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:57.641572+0000 osd.1 (osd.1) 84 : cluster [DBG] 4.d scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:57.655685+0000 osd.1 (osd.1) 85 : cluster [DBG] 4.d scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 1703936 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:29.092699+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:58.686341+0000 osd.1 (osd.1) 86 : cluster [DBG] 4.10 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:58.700386+0000 osd.1 (osd.1) 87 : cluster [DBG] 4.10 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 87) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:58.686341+0000 osd.1 (osd.1) 86 : cluster [DBG] 4.10 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:58.700386+0000 osd.1 (osd.1) 87 : cluster [DBG] 4.10 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 1695744 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 90 heartbeat osd_stat(store_statfs(0x4fcaa2000/0x0/0x4ffc00000, data 0xf3ea6/0x17c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:30.092840+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:59.681810+0000 osd.1 (osd.1) 88 : cluster [DBG] 4.12 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:40:59.695905+0000 osd.1 (osd.1) 89 : cluster [DBG] 4.12 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 89) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:59.681810+0000 osd.1 (osd.1) 88 : cluster [DBG] 4.12 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:40:59.695905+0000 osd.1 (osd.1) 89 : cluster [DBG] 4.12 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 1695744 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 676044 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:31.092994+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 1687552 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:32.093139+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:41:01.682165+0000 osd.1 (osd.1) 90 : cluster [DBG] 4.14 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:41:01.696242+0000 osd.1 (osd.1) 91 : cluster [DBG] 4.14 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 91) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:41:01.682165+0000 osd.1 (osd.1) 90 : cluster [DBG] 4.14 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:41:01.696242+0000 osd.1 (osd.1) 91 : cluster [DBG] 4.14 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 1646592 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 91 heartbeat osd_stat(store_statfs(0x4fca9e000/0x0/0x4ffc00000, data 0xf5a23/0x17f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:33.093300+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:41:02.639310+0000 osd.1 (osd.1) 92 : cluster [DBG] 7.7 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:41:02.653437+0000 osd.1 (osd.1) 93 : cluster [DBG] 7.7 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 93) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:41:02.639310+0000 osd.1 (osd.1) 92 : cluster [DBG] 7.7 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:41:02.653437+0000 osd.1 (osd.1) 93 : cluster [DBG] 7.7 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 1597440 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:34.093486+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 93 pg[9.15(unlocked)] enter Initial
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 93 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92) [1] r=0 lpr=0 pi=[54,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000052 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 93 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92) [1] r=0 lpr=0 pi=[54,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 93 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92) [1] r=0 lpr=93 pi=[54,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000012 1 0.000218
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 93 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92) [1] r=0 lpr=93 pi=[54,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 93 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92) [1] r=0 lpr=93 pi=[54,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 93 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92) [1] r=0 lpr=93 pi=[54,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 93 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92) [1] r=0 lpr=93 pi=[54,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 93 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92) [1] r=0 lpr=93 pi=[54,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 93 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92) [1] r=0 lpr=93 pi=[54,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 93 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92) [1] r=0 lpr=93 pi=[54,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 93 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92) [1] r=0 lpr=93 pi=[54,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000114 1 0.000040
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 93 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92) [1] r=0 lpr=93 pi=[54,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 93 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92) [1] r=0 lpr=93 pi=[54,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000024 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 93 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92) [1] r=0 lpr=93 pi=[54,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000149 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 93 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92) [1] r=0 lpr=93 pi=[54,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 1589248 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:35.093581+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 93 handle_osd_map epochs [93,94], i have 93, src has [1,94]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 93 handle_osd_map epochs [94,94], i have 94, src has [1,94]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92) [1] r=0 lpr=93 pi=[54,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.001706 2 0.000046
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92) [1] r=0 lpr=93 pi=[54,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.001884 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92) [1] r=0 lpr=93 pi=[54,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.001912 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92) [1] r=0 lpr=93 pi=[54,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000322 1 0.000376
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000082 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 1581056 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694605 data_alloc: 218103808 data_used: 192512
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.b scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.b scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:36.093679+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:41:05.600581+0000 osd.1 (osd.1) 94 : cluster [DBG] 7.b scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:41:05.614442+0000 osd.1 (osd.1) 95 : cluster [DBG] 7.b scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 95) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:41:05.600581+0000 osd.1 (osd.1) 94 : cluster [DBG] 7.b scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:41:05.614442+0000 osd.1 (osd.1) 95 : cluster [DBG] 7.b scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 94 heartbeat osd_stat(store_statfs(0x4fca93000/0x0/0x4ffc00000, data 0xfabbc/0x188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 94 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 95 pg[9.15( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.087932 5 0.000174
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 95 pg[9.15( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 95 pg[9.15( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 95 pg[9.15( v 44'389 lc 40'141 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.003130 4 0.000096
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 95 pg[9.15( v 44'389 lc 40'141 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 95 pg[9.15( v 44'389 lc 40'141 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000061 1 0.000035
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 95 pg[9.15( v 44'389 lc 40'141 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 95 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.028536 1 0.000055
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 95 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 466944 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:37.093781+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.898535 1 0.000031
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.930367 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.018431 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000060 1 0.000101
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000024 1 0.000028
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0olog.dups.size()=9
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001538 3 0.000030
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 458752 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.d deep-scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.d deep-scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:38.093870+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:41:07.580395+0000 osd.1 (osd.1) 96 : cluster [DBG] 7.d deep-scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:41:07.594532+0000 osd.1 (osd.1) 97 : cluster [DBG] 7.d deep-scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 96 handle_osd_map epochs [96,97], i have 96, src has [1,97]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.633497238s of 10.676719666s, submitted: 47
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 97 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002368 2 0.000063
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 97 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.003982 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 97 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 97 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=96/97 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 97 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 97) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:41:07.580395+0000 osd.1 (osd.1) 96 : cluster [DBG] 7.d deep-scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:41:07.594532+0000 osd.1 (osd.1) 97 : cluster [DBG] 7.d deep-scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 97 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=96/97 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 97 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=96/97 n=5 ec=45/34 lis/c=96/54 les/c/f=97/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002064 4 0.000401
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 97 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=96/97 n=5 ec=45/34 lis/c=96/54 les/c/f=97/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 97 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=96/97 n=5 ec=45/34 lis/c=96/54 les/c/f=97/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 97 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=96/97 n=5 ec=45/34 lis/c=96/54 les/c/f=97/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 450560 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:39.093982+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 450560 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:40.094072+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 385024 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 713958 data_alloc: 218103808 data_used: 200704
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:41.094169+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 376832 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 98 heartbeat osd_stat(store_statfs(0x4fca88000/0x0/0x4ffc00000, data 0x101806/0x195000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:42.094263+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 376832 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:43.094374+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 368640 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:44.094487+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 360448 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:45.094638+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 99 handle_osd_map epochs [100,101], i have 99, src has [1,101]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 352256 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 723406 data_alloc: 218103808 data_used: 200704
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:46.094744+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 344064 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:47.094863+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:41:16.597312+0000 osd.1 (osd.1) 98 : cluster [DBG] 7.10 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:41:16.611443+0000 osd.1 (osd.1) 99 : cluster [DBG] 7.10 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 99) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:41:16.597312+0000 osd.1 (osd.1) 98 : cluster [DBG] 7.10 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:41:16.611443+0000 osd.1 (osd.1) 99 : cluster [DBG] 7.10 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 327680 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 103 heartbeat osd_stat(store_statfs(0x4fca77000/0x0/0x4ffc00000, data 0x109f53/0x1a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:48.094997+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 103 handle_osd_map epochs [103,104], i have 103, src has [1,104]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.033578873s of 10.053858757s, submitted: 14
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 294912 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:49.095099+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 286720 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:50.095203+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:41:19.621340+0000 osd.1 (osd.1) 100 : cluster [DBG] 7.12 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:41:19.634972+0000 osd.1 (osd.1) 101 : cluster [DBG] 7.12 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 101) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:41:19.621340+0000 osd.1 (osd.1) 100 : cluster [DBG] 7.12 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:41:19.634972+0000 osd.1 (osd.1) 101 : cluster [DBG] 7.12 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 278528 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 737742 data_alloc: 218103808 data_used: 200704
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:51.095338+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 270336 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:52.095432+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 270336 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 105 heartbeat osd_stat(store_statfs(0x4fca72000/0x0/0x4ffc00000, data 0x10d535/0x1aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 105 handle_osd_map epochs [106,108], i have 105, src has [1,108]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 105 handle_osd_map epochs [106,108], i have 108, src has [1,108]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:53.095525+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 155648 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:54.095638+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 155648 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:55.095749+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:41:24.531360+0000 osd.1 (osd.1) 102 : cluster [DBG] 7.14 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:41:24.545476+0000 osd.1 (osd.1) 103 : cluster [DBG] 7.14 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73195520 unmapped: 147456 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746420 data_alloc: 218103808 data_used: 204800
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 103) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:41:24.531360+0000 osd.1 (osd.1) 102 : cluster [DBG] 7.14 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:41:24.545476+0000 osd.1 (osd.1) 103 : cluster [DBG] 7.14 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:56.095897+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 108 heartbeat osd_stat(store_statfs(0x4fca6b000/0x0/0x4ffc00000, data 0x11254e/0x1b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73195520 unmapped: 147456 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:57.095992+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 139264 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:58.096105+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.083814621s of 10.094331741s, submitted: 9
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 155648 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fca67000/0x0/0x4ffc00000, data 0x1140cb/0x1b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 109 handle_osd_map epochs [109,110], i have 109, src has [1,110]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:59.096199+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73195520 unmapped: 147456 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 110 handle_osd_map epochs [110,111], i have 110, src has [1,111]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 111 pg[9.1f(unlocked)] enter Initial
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111) [1] r=0 lpr=0 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000031 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111) [1] r=0 lpr=0 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111) [1] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000016
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111) [1] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111) [1] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111) [1] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111) [1] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111) [1] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111) [1] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111) [1] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111) [1] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000075 1 0.000034
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111) [1] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111) [1] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000022 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111) [1] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000108 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111) [1] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:00.096365+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 106496 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 757896 data_alloc: 218103808 data_used: 212992
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 111 handle_osd_map epochs [111,112], i have 111, src has [1,112]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 111 handle_osd_map epochs [111,112], i have 112, src has [1,112]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111) [1] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.965890 2 0.000040
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111) [1] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.966049 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111) [1] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.966097 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=111) [1] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[65,112)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[65,112)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000158 1 0.000253
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[65,112)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[65,112)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[65,112)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[65,112)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000044 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[65,112)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 112 handle_osd_map epochs [112,112], i have 112, src has [1,112]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:01.096491+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 98304 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 112 heartbeat osd_stat(store_statfs(0x4fca61000/0x0/0x4ffc00000, data 0x1176ce/0x1bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:02.096609+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:41:31.704640+0000 osd.1 (osd.1) 104 : cluster [DBG] 7.16 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:41:31.718659+0000 osd.1 (osd.1) 105 : cluster [DBG] 7.16 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 113 pg[9.1f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.517999 5 0.000158
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 113 pg[9.1f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 113 pg[9.1f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=65/65 les/c/f=66/66/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[65,112)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: not registered w/ OSD
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 113 pg[9.1f( v 44'389 lc 40'183 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[65,112)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.003836 4 0.000283
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 113 pg[9.1f( v 44'389 lc 40'183 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[65,112)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 113 pg[9.1f( v 44'389 lc 40'183 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[65,112)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000080 1 0.000067
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 113 pg[9.1f( v 44'389 lc 40'183 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[65,112)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 81920 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[65,112)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035888 1 0.000043
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 113 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[65,112)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 105) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:41:31.704640+0000 osd.1 (osd.1) 104 : cluster [DBG] 7.16 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:41:31.718659+0000 osd.1 (osd.1) 105 : cluster [DBG] 7.16 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[65,112)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.495846 1 0.000024
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[65,112)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.535754 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[65,112)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.053897 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[65,112)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000041 1 0.000071
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000023 1 0.000027
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: merge_log_dups log.dups.size()=0 olog.dups.size()=11
Nov 26 12:58:09 compute-0 ceph-osd[89328]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001073 3 0.000030
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000016 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 114 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:03.096734+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:41:32.741286+0000 osd.1 (osd.1) 106 : cluster [DBG] 7.17 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:41:32.755232+0000 osd.1 (osd.1) 107 : cluster [DBG] 7.17 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 107) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:41:32.741286+0000 osd.1 (osd.1) 106 : cluster [DBG] 7.17 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:41:32.755232+0000 osd.1 (osd.1) 107 : cluster [DBG] 7.17 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 114 handle_osd_map epochs [114,115], i have 114, src has [1,115]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 114 handle_osd_map epochs [115,115], i have 115, src has [1,115]
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 115 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000024 2 0.000122
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 115 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001205 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 115 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 115 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=114/115 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 115 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=114/115 n=5 ec=45/34 lis/c=112/65 les/c/f=113/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 115 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=114/115 n=5 ec=45/34 lis/c=114/65 les/c/f=115/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001331 4 0.000083
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 115 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=114/115 n=5 ec=45/34 lis/c=114/65 les/c/f=115/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 115 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=114/115 n=5 ec=45/34 lis/c=114/65 les/c/f=115/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 pg_epoch: 115 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=114/115 n=5 ec=45/34 lis/c=114/65 les/c/f=115/66/0 sis=114) [1] r=0 lpr=114 pi=[65,114)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:04.097006+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca57000/0x0/0x4ffc00000, data 0x11c65a/0x1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 73728 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:05.097137+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 65536 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 780793 data_alloc: 218103808 data_used: 221184
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:06.097239+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 65536 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:07.097334+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:08.097436+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:41:37.675110+0000 osd.1 (osd.1) 108 : cluster [DBG] 7.19 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:41:37.689293+0000 osd.1 (osd.1) 109 : cluster [DBG] 7.19 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 109) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:41:37.675110+0000 osd.1 (osd.1) 108 : cluster [DBG] 7.19 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:41:37.689293+0000 osd.1 (osd.1) 109 : cluster [DBG] 7.19 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:09.097562+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 57344 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:10.097665+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 49152 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 780549 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:11.097772+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 49152 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.096872330s of 13.135742188s, submitted: 41
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:12.097861+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:41:41.602326+0000 osd.1 (osd.1) 110 : cluster [DBG] 7.1d scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:41:41.616494+0000 osd.1 (osd.1) 111 : cluster [DBG] 7.1d scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 40960 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 111) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:41:41.602326+0000 osd.1 (osd.1) 110 : cluster [DBG] 7.1d scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:41:41.616494+0000 osd.1 (osd.1) 111 : cluster [DBG] 7.1d scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:13.097972+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:41:42.636028+0000 osd.1 (osd.1) 112 : cluster [DBG] 7.1e scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:41:42.650060+0000 osd.1 (osd.1) 113 : cluster [DBG] 7.1e scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 32768 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 113) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:41:42.636028+0000 osd.1 (osd.1) 112 : cluster [DBG] 7.1e scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:41:42.650060+0000 osd.1 (osd.1) 113 : cluster [DBG] 7.1e scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:14.098214+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 24576 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:15.098333+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 16384 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 782845 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:16.098430+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 0 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.1 deep-scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.1 deep-scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:17.098561+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:41:46.608902+0000 osd.1 (osd.1) 114 : cluster [DBG] 8.1 deep-scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:41:46.622999+0000 osd.1 (osd.1) 115 : cluster [DBG] 8.1 deep-scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 1040384 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 115) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:41:46.608902+0000 osd.1 (osd.1) 114 : cluster [DBG] 8.1 deep-scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:41:46.622999+0000 osd.1 (osd.1) 115 : cluster [DBG] 8.1 deep-scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:18.098784+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 1040384 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:19.098884+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:20.099172+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1032192 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783992 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:21.099281+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1024000 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:22.099432+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1024000 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:23.099572+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1015808 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:24.099735+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1015808 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:25.099867+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1015808 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 783992 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:26.099978+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.015340805s of 15.022736549s, submitted: 6
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:27.100096+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:41:56.625117+0000 osd.1 (osd.1) 116 : cluster [DBG] 8.3 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:41:56.639289+0000 osd.1 (osd.1) 117 : cluster [DBG] 8.3 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 999424 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 117) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:41:56.625117+0000 osd.1 (osd.1) 116 : cluster [DBG] 8.3 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:41:56.639289+0000 osd.1 (osd.1) 117 : cluster [DBG] 8.3 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:28.100259+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:29.100368+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 991232 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:30.100514+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 785139 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:31.100653+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:32.100765+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 974848 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:33.101619+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 966656 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:34.101771+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:03.648150+0000 osd.1 (osd.1) 118 : cluster [DBG] 8.5 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:03.662267+0000 osd.1 (osd.1) 119 : cluster [DBG] 8.5 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 966656 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:35.101907+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 119) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:03.648150+0000 osd.1 (osd.1) 118 : cluster [DBG] 8.5 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:03.662267+0000 osd.1 (osd.1) 119 : cluster [DBG] 8.5 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 786286 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:36.101999+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.079909325s of 10.085831642s, submitted: 4
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:37.102099+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:06.710935+0000 osd.1 (osd.1) 120 : cluster [DBG] 8.7 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:06.728625+0000 osd.1 (osd.1) 121 : cluster [DBG] 8.7 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 121) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:06.710935+0000 osd.1 (osd.1) 120 : cluster [DBG] 8.7 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:06.728625+0000 osd.1 (osd.1) 121 : cluster [DBG] 8.7 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 958464 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:38.102255+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:07.676877+0000 osd.1 (osd.1) 122 : cluster [DBG] 8.8 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:07.691012+0000 osd.1 (osd.1) 123 : cluster [DBG] 8.8 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 123) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:07.676877+0000 osd.1 (osd.1) 122 : cluster [DBG] 8.8 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:07.691012+0000 osd.1 (osd.1) 123 : cluster [DBG] 8.8 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:39.102404+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 942080 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:40.102505+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 788580 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:41.102638+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.a deep-scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.a deep-scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:42.102742+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:11.731588+0000 osd.1 (osd.1) 124 : cluster [DBG] 8.a deep-scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:11.745715+0000 osd.1 (osd.1) 125 : cluster [DBG] 8.a deep-scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 125) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:11.731588+0000 osd.1 (osd.1) 124 : cluster [DBG] 8.a deep-scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:11.745715+0000 osd.1 (osd.1) 125 : cluster [DBG] 8.a deep-scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 925696 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.2 deep-scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.2 deep-scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:43.102911+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:12.744417+0000 osd.1 (osd.1) 126 : cluster [DBG] 9.2 deep-scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:12.776191+0000 osd.1 (osd.1) 127 : cluster [DBG] 9.2 deep-scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 127) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:12.744417+0000 osd.1 (osd.1) 126 : cluster [DBG] 9.2 deep-scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:12.776191+0000 osd.1 (osd.1) 127 : cluster [DBG] 9.2 deep-scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 917504 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:44.103287+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 917504 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:45.103393+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 909312 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 790874 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:46.103498+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 901120 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:47.103598+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 892928 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:48.103741+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 892928 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:49.103862+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 892928 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:50.103952+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 790874 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:51.104040+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 884736 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:52.104129+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 876544 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.987401962s of 15.996603966s, submitted: 8
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:53.104226+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:22.707633+0000 osd.1 (osd.1) 128 : cluster [DBG] 8.13 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:22.721692+0000 osd.1 (osd.1) 129 : cluster [DBG] 8.13 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 129) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:22.707633+0000 osd.1 (osd.1) 128 : cluster [DBG] 8.13 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:22.721692+0000 osd.1 (osd.1) 129 : cluster [DBG] 8.13 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 876544 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:54.104380+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 868352 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:55.104495+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:24.738282+0000 osd.1 (osd.1) 130 : cluster [DBG] 9.4 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:24.787736+0000 osd.1 (osd.1) 131 : cluster [DBG] 9.4 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 131) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:24.738282+0000 osd.1 (osd.1) 130 : cluster [DBG] 9.4 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:24.787736+0000 osd.1 (osd.1) 131 : cluster [DBG] 9.4 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 860160 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 793169 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:56.104648+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 860160 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:57.104831+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:26.664215+0000 osd.1 (osd.1) 132 : cluster [DBG] 8.16 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:26.678356+0000 osd.1 (osd.1) 133 : cluster [DBG] 8.16 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 133) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:26.664215+0000 osd.1 (osd.1) 132 : cluster [DBG] 8.16 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:26.678356+0000 osd.1 (osd.1) 133 : cluster [DBG] 8.16 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:58.105124+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 851968 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:59.105307+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:28.719706+0000 osd.1 (osd.1) 134 : cluster [DBG] 8.17 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:28.733836+0000 osd.1 (osd.1) 135 : cluster [DBG] 8.17 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 135) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:28.719706+0000 osd.1 (osd.1) 134 : cluster [DBG] 8.17 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:28.733836+0000 osd.1 (osd.1) 135 : cluster [DBG] 8.17 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 843776 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:00.105556+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 843776 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 795465 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:01.105661+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:30.718636+0000 osd.1 (osd.1) 136 : cluster [DBG] 8.19 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:30.732861+0000 osd.1 (osd.1) 137 : cluster [DBG] 8.19 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 137) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:30.718636+0000 osd.1 (osd.1) 136 : cluster [DBG] 8.19 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:30.732861+0000 osd.1 (osd.1) 137 : cluster [DBG] 8.19 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 835584 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:02.105778+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:31.766300+0000 osd.1 (osd.1) 138 : cluster [DBG] 8.1e scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:31.780336+0000 osd.1 (osd.1) 139 : cluster [DBG] 8.1e scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 139) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:31.766300+0000 osd.1 (osd.1) 138 : cluster [DBG] 8.1e scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:31.780336+0000 osd.1 (osd.1) 139 : cluster [DBG] 8.1e scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 827392 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.a scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.029523849s of 10.043829918s, submitted: 12
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.a scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:03.105883+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:32.751525+0000 osd.1 (osd.1) 140 : cluster [DBG] 9.a scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:32.797341+0000 osd.1 (osd.1) 141 : cluster [DBG] 9.a scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 141) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:32.751525+0000 osd.1 (osd.1) 140 : cluster [DBG] 9.a scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:32.797341+0000 osd.1 (osd.1) 141 : cluster [DBG] 9.a scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 827392 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:04.106016+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:05.106145+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:34.674912+0000 osd.1 (osd.1) 142 : cluster [DBG] 9.10 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:34.696118+0000 osd.1 (osd.1) 143 : cluster [DBG] 9.10 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 143) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:34.674912+0000 osd.1 (osd.1) 142 : cluster [DBG] 9.10 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:34.696118+0000 osd.1 (osd.1) 143 : cluster [DBG] 9.10 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 819200 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 800056 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:06.106302+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 802816 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:07.106417+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:36.678662+0000 osd.1 (osd.1) 144 : cluster [DBG] 9.12 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:36.706877+0000 osd.1 (osd.1) 145 : cluster [DBG] 9.12 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 145) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:36.678662+0000 osd.1 (osd.1) 144 : cluster [DBG] 9.12 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:36.706877+0000 osd.1 (osd.1) 145 : cluster [DBG] 9.12 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 794624 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:08.106604+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 794624 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:09.106785+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 786432 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:10.106906+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 1835008 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 802352 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:11.107011+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:40.558668+0000 osd.1 (osd.1) 146 : cluster [DBG] 9.14 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:40.590457+0000 osd.1 (osd.1) 147 : cluster [DBG] 9.14 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 147) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:40.558668+0000 osd.1 (osd.1) 146 : cluster [DBG] 9.14 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:40.590457+0000 osd.1 (osd.1) 147 : cluster [DBG] 9.14 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 1835008 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:12.107139+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 1826816 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:13.107244+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:42.525051+0000 osd.1 (osd.1) 148 : cluster [DBG] 9.1a scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:42.553324+0000 osd.1 (osd.1) 149 : cluster [DBG] 9.1a scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 149) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:42.525051+0000 osd.1 (osd.1) 148 : cluster [DBG] 9.1a scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:42.553324+0000 osd.1 (osd.1) 149 : cluster [DBG] 9.1a scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.788621902s of 10.801385880s, submitted: 10
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 1818624 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:14.107388+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:43.552810+0000 osd.1 (osd.1) 150 : cluster [DBG] 11.5 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:43.566980+0000 osd.1 (osd.1) 151 : cluster [DBG] 11.5 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 151) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:43.552810+0000 osd.1 (osd.1) 150 : cluster [DBG] 11.5 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:43.566980+0000 osd.1 (osd.1) 151 : cluster [DBG] 11.5 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 1810432 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:15.107513+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:44.514538+0000 osd.1 (osd.1) 152 : cluster [DBG] 11.7 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:44.528611+0000 osd.1 (osd.1) 153 : cluster [DBG] 11.7 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 153) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:44.514538+0000 osd.1 (osd.1) 152 : cluster [DBG] 11.7 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:44.528611+0000 osd.1 (osd.1) 153 : cluster [DBG] 11.7 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 1810432 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 805796 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:16.107639+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.a scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.a scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 1794048 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:17.107784+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:46.472300+0000 osd.1 (osd.1) 154 : cluster [DBG] 11.a scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:46.486217+0000 osd.1 (osd.1) 155 : cluster [DBG] 11.a scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 155) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:46.472300+0000 osd.1 (osd.1) 154 : cluster [DBG] 11.a scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:46.486217+0000 osd.1 (osd.1) 155 : cluster [DBG] 11.a scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.c scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.c scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 1777664 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:18.107962+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:47.461583+0000 osd.1 (osd.1) 156 : cluster [DBG] 11.c scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:47.475985+0000 osd.1 (osd.1) 157 : cluster [DBG] 11.c scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 157) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:47.461583+0000 osd.1 (osd.1) 156 : cluster [DBG] 11.c scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:47.475985+0000 osd.1 (osd.1) 157 : cluster [DBG] 11.c scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 1769472 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:19.108094+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.13 deep-scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.13 deep-scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 1769472 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:20.108215+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:49.458470+0000 osd.1 (osd.1) 158 : cluster [DBG] 11.13 deep-scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:49.472608+0000 osd.1 (osd.1) 159 : cluster [DBG] 11.13 deep-scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 159) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:49.458470+0000 osd.1 (osd.1) 158 : cluster [DBG] 11.13 deep-scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:49.472608+0000 osd.1 (osd.1) 159 : cluster [DBG] 11.13 deep-scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 1769472 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810390 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:21.108363+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:50.486959+0000 osd.1 (osd.1) 160 : cluster [DBG] 11.16 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:50.501076+0000 osd.1 (osd.1) 161 : cluster [DBG] 11.16 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 161) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:50.486959+0000 osd.1 (osd.1) 160 : cluster [DBG] 11.16 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:50.501076+0000 osd.1 (osd.1) 161 : cluster [DBG] 11.16 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 1761280 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:22.108660+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 1761280 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:23.108767+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 1753088 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:24.108903+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 1753088 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:25.109006+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 1736704 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810390 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:26.109101+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 1728512 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:27.109210+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.816082001s of 13.830812454s, submitted: 12
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 1720320 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:28.109326+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:57.383631+0000 osd.1 (osd.1) 162 : cluster [DBG] 11.1d scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:42:57.397718+0000 osd.1 (osd.1) 163 : cluster [DBG] 11.1d scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 163) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:57.383631+0000 osd.1 (osd.1) 162 : cluster [DBG] 11.1d scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:42:57.397718+0000 osd.1 (osd.1) 163 : cluster [DBG] 11.1d scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 1712128 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:29.109493+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 1712128 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:30.109611+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 1712128 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 811539 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:31.109776+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 1703936 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:32.110563+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:01.457250+0000 osd.1 (osd.1) 164 : cluster [DBG] 6.1 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:01.470737+0000 osd.1 (osd.1) 165 : cluster [DBG] 6.1 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 165) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:01.457250+0000 osd.1 (osd.1) 164 : cluster [DBG] 6.1 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:01.470737+0000 osd.1 (osd.1) 165 : cluster [DBG] 6.1 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.19 deep-scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.19 deep-scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 1695744 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:33.110791+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:02.477205+0000 osd.1 (osd.1) 166 : cluster [DBG] 10.19 deep-scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:02.490233+0000 osd.1 (osd.1) 167 : cluster [DBG] 10.19 deep-scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 167) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:02.477205+0000 osd.1 (osd.1) 166 : cluster [DBG] 10.19 deep-scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:02.490233+0000 osd.1 (osd.1) 167 : cluster [DBG] 10.19 deep-scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 1687552 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:34.111312+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 1679360 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:35.111438+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 1679360 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 813835 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:36.111575+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 1671168 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:37.111693+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 1671168 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:38.111822+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 1671168 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:39.111989+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.041087151s of 12.048663139s, submitted: 6
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 1662976 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:40.112129+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:09.432299+0000 osd.1 (osd.1) 168 : cluster [DBG] 10.13 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:09.446421+0000 osd.1 (osd.1) 169 : cluster [DBG] 10.13 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 169) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:09.432299+0000 osd.1 (osd.1) 168 : cluster [DBG] 10.13 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:09.446421+0000 osd.1 (osd.1) 169 : cluster [DBG] 10.13 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 1662976 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 814984 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:41.112346+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 1654784 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:42.112531+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.b scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.b scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 1654784 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:43.112695+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:12.382964+0000 osd.1 (osd.1) 170 : cluster [DBG] 10.b scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:12.397044+0000 osd.1 (osd.1) 171 : cluster [DBG] 10.b scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 171) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:12.382964+0000 osd.1 (osd.1) 170 : cluster [DBG] 10.b scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:12.397044+0000 osd.1 (osd.1) 171 : cluster [DBG] 10.b scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 1654784 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:44.112924+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:13.355000+0000 osd.1 (osd.1) 172 : cluster [DBG] 10.12 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:13.369112+0000 osd.1 (osd.1) 173 : cluster [DBG] 10.12 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 173) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:13.355000+0000 osd.1 (osd.1) 172 : cluster [DBG] 10.12 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:13.369112+0000 osd.1 (osd.1) 173 : cluster [DBG] 10.12 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 1646592 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:45.113104+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:14.308250+0000 osd.1 (osd.1) 174 : cluster [DBG] 10.10 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:14.322349+0000 osd.1 (osd.1) 175 : cluster [DBG] 10.10 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 175) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:14.308250+0000 osd.1 (osd.1) 174 : cluster [DBG] 10.10 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:14.322349+0000 osd.1 (osd.1) 175 : cluster [DBG] 10.10 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 1646592 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 818430 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:46.113277+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.1a deep-scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.1a deep-scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 1646592 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:47.113505+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:16.360190+0000 osd.1 (osd.1) 176 : cluster [DBG] 10.1a deep-scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:16.373627+0000 osd.1 (osd.1) 177 : cluster [DBG] 10.1a deep-scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 177) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:16.360190+0000 osd.1 (osd.1) 176 : cluster [DBG] 10.1a deep-scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:16.373627+0000 osd.1 (osd.1) 177 : cluster [DBG] 10.1a deep-scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 1630208 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:48.113730+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 1630208 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:49.113832+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 1622016 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:50.113995+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 1622016 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 819579 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:51.114163+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.f scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.929701805s of 11.943083763s, submitted: 10
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.f scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 1613824 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:52.114328+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:21.375427+0000 osd.1 (osd.1) 178 : cluster [DBG] 10.f scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:21.389567+0000 osd.1 (osd.1) 179 : cluster [DBG] 10.f scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 1613824 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 179) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:21.375427+0000 osd.1 (osd.1) 178 : cluster [DBG] 10.f scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:21.389567+0000 osd.1 (osd.1) 179 : cluster [DBG] 10.f scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:53.114498+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 1605632 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:54.114702+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:23.316962+0000 osd.1 (osd.1) 180 : cluster [DBG] 10.11 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:23.331075+0000 osd.1 (osd.1) 181 : cluster [DBG] 10.11 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 1597440 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 181) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:23.316962+0000 osd.1 (osd.1) 180 : cluster [DBG] 10.11 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:23.331075+0000 osd.1 (osd.1) 181 : cluster [DBG] 10.11 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:55.114939+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 1597440 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 821876 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:56.115065+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 1597440 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:57.115185+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1589248 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:58.115326+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1589248 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:59.115456+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1581056 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:00.115613+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1581056 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 823024 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:01.115815+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:30.434793+0000 osd.1 (osd.1) 182 : cluster [DBG] 10.2 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:30.448873+0000 osd.1 (osd.1) 183 : cluster [DBG] 10.2 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.093025208s of 10.103037834s, submitted: 6
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 1572864 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 183) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:30.434793+0000 osd.1 (osd.1) 182 : cluster [DBG] 10.2 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:30.448873+0000 osd.1 (osd.1) 183 : cluster [DBG] 10.2 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:02.116194+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:31.478469+0000 osd.1 (osd.1) 184 : cluster [DBG] 10.6 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:31.492559+0000 osd.1 (osd.1) 185 : cluster [DBG] 10.6 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 1564672 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 185) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:31.478469+0000 osd.1 (osd.1) 184 : cluster [DBG] 10.6 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:31.492559+0000 osd.1 (osd.1) 185 : cluster [DBG] 10.6 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:03.116363+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 1564672 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:04.116482+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1556480 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:05.116585+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1556480 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 824172 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:06.116728+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1548288 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:07.116804+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1540096 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:08.116916+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1540096 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:09.117025+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1540096 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:10.117144+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:39.472770+0000 osd.1 (osd.1) 186 : cluster [DBG] 10.14 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:39.490396+0000 osd.1 (osd.1) 187 : cluster [DBG] 10.14 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1531904 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 825321 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 187) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:39.472770+0000 osd.1 (osd.1) 186 : cluster [DBG] 10.14 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:39.490396+0000 osd.1 (osd.1) 187 : cluster [DBG] 10.14 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:11.117292+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1531904 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:12.117403+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.961690903s of 10.968150139s, submitted: 4
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1523712 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:13.117508+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:42.446627+0000 osd.1 (osd.1) 188 : cluster [DBG] 6.2 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:42.460701+0000 osd.1 (osd.1) 189 : cluster [DBG] 6.2 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1523712 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:14.117679+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 189) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:42.446627+0000 osd.1 (osd.1) 188 : cluster [DBG] 6.2 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:42.460701+0000 osd.1 (osd.1) 189 : cluster [DBG] 6.2 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1523712 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:15.117838+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1515520 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826468 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:16.117998+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1515520 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:17.118107+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 1507328 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:18.118219+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1499136 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:19.118347+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:48.661249+0000 osd.1 (osd.1) 190 : cluster [DBG] 6.6 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:48.678879+0000 osd.1 (osd.1) 191 : cluster [DBG] 6.6 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 191) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:48.661249+0000 osd.1 (osd.1) 190 : cluster [DBG] 6.6 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:48.678879+0000 osd.1 (osd.1) 191 : cluster [DBG] 6.6 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1499136 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:20.118539+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1499136 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 827615 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:21.118648+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 1490944 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:22.118786+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:51.611872+0000 osd.1 (osd.1) 192 : cluster [DBG] 6.e scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:51.629197+0000 osd.1 (osd.1) 193 : cluster [DBG] 6.e scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 193) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:51.611872+0000 osd.1 (osd.1) 192 : cluster [DBG] 6.e scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:51.629197+0000 osd.1 (osd.1) 193 : cluster [DBG] 6.e scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 1490944 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:23.118930+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 1482752 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:24.119152+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1474560 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:25.120015+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 1466368 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828762 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:26.120126+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 1458176 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:27.120231+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.c scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.151865005s of 15.159042358s, submitted: 6
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.c scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 1449984 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:28.120349+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:57.606940+0000 osd.1 (osd.1) 194 : cluster [DBG] 6.c scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:43:57.623363+0000 osd.1 (osd.1) 195 : cluster [DBG] 6.c scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 195) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:57.606940+0000 osd.1 (osd.1) 194 : cluster [DBG] 6.c scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:43:57.623363+0000 osd.1 (osd.1) 195 : cluster [DBG] 6.c scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 1449984 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:29.120541+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 1449984 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:30.120670+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1441792 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 829909 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:31.120792+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1433600 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:32.120956+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:44:01.553658+0000 osd.1 (osd.1) 196 : cluster [DBG] 6.4 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:44:01.581823+0000 osd.1 (osd.1) 197 : cluster [DBG] 6.4 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 197) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:44:01.553658+0000 osd.1 (osd.1) 196 : cluster [DBG] 6.4 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:44:01.581823+0000 osd.1 (osd.1) 197 : cluster [DBG] 6.4 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 1425408 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:33.121243+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 1425408 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:34.121398+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.b scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1417216 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:35.121502+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:44:04.596946+0000 osd.1 (osd.1) 198 : cluster [DBG] 6.b scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:44:04.614802+0000 osd.1 (osd.1) 199 : cluster [DBG] 6.b scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 199) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:44:04.596946+0000 osd.1 (osd.1) 198 : cluster [DBG] 6.b scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:44:04.614802+0000 osd.1 (osd.1) 199 : cluster [DBG] 6.b scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1417216 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 832203 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:36.121641+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1417216 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:37.121811+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1409024 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:38.121966+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:44:07.569186+0000 osd.1 (osd.1) 200 : cluster [DBG] 6.d scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:44:07.590371+0000 osd.1 (osd.1) 201 : cluster [DBG] 6.d scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 201) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:44:07.569186+0000 osd.1 (osd.1) 200 : cluster [DBG] 6.d scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:44:07.590371+0000 osd.1 (osd.1) 201 : cluster [DBG] 6.d scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1400832 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:39.122196+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1400832 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:40.122374+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1392640 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 833350 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:41.122545+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.939973831s of 13.950237274s, submitted: 8
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1384448 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:42.122730+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:44:11.555965+0000 osd.1 (osd.1) 202 : cluster [DBG] 9.15 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:44:11.584241+0000 osd.1 (osd.1) 203 : cluster [DBG] 9.15 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 203) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:44:11.555965+0000 osd.1 (osd.1) 202 : cluster [DBG] 9.15 scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:44:11.584241+0000 osd.1 (osd.1) 203 : cluster [DBG] 9.15 scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 1376256 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:43.122931+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 1376256 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:44.123099+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1368064 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:45.123255+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1368064 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:46.123392+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  log_queue is 2 last_log 205 sent 203 num 2 unsent 2 sending 2
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:44:15.542926+0000 osd.1 (osd.1) 204 : cluster [DBG] 9.1f scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  will send 2025-11-26T12:44:15.574728+0000 osd.1 (osd.1) 205 : cluster [DBG] 9.1f scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client handle_log_ack log(last 205) v1
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:44:15.542926+0000 osd.1 (osd.1) 204 : cluster [DBG] 9.1f scrub starts
Nov 26 12:58:09 compute-0 ceph-osd[89328]: log_client  logged 2025-11-26T12:44:15.574728+0000 osd.1 (osd.1) 205 : cluster [DBG] 9.1f scrub ok
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1368064 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:47.123632+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 1359872 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:48.123812+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 1359872 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:49.123974+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 1359872 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:50.124124+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 1351680 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:51.124401+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 1351680 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:52.124583+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 1351680 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:53.124700+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 1343488 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:54.124848+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 1351680 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:55.125017+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 1343488 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:56.125691+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 1343488 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:57.125799+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 1335296 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:58.125928+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 1335296 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:59.126065+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 1327104 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:00.126168+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 1327104 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:01.126279+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 1327104 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:02.126451+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 1327104 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:03.126624+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 1318912 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:04.126839+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 1318912 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:05.126993+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 1310720 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:06.127213+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1302528 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:07.128867+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 1294336 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:08.129069+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 1294336 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:09.129249+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 1286144 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:10.129397+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 1286144 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:11.129528+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 1277952 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:12.129687+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 1277952 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:13.129891+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 1269760 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:14.130079+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 1269760 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:15.130199+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 1261568 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:16.130319+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 1261568 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:17.130422+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 1253376 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:18.130523+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 1253376 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:19.130637+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 1245184 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:20.130743+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 1245184 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:21.130809+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 1245184 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:22.130912+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 1236992 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:23.131011+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 1236992 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:24.131215+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1228800 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:25.131348+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1228800 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:26.131504+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1228800 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:27.131650+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1220608 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:28.131830+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1220608 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:29.131978+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1212416 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:30.132148+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1204224 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:31.132362+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1204224 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:32.132505+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1196032 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:33.132629+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1196032 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:34.132816+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1187840 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:35.132945+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1187840 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:36.133084+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1187840 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:37.133220+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1179648 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:38.133358+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1179648 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:39.133488+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1171456 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:40.133641+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1171456 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:41.133810+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1171456 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:42.133975+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1163264 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:43.134164+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1163264 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:44.134349+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1155072 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:45.134516+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1155072 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:46.134692+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1155072 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:47.134827+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1146880 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:48.134983+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1146880 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:49.135149+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1146880 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:50.135340+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 1138688 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:51.135513+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 1138688 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:52.135661+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 1138688 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:53.135794+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1130496 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:54.135977+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 1122304 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:55.136101+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1114112 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:56.136214+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1114112 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:57.136342+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1105920 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:58.136458+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1105920 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:59.136749+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1097728 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:00.136866+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1097728 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:01.136993+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1097728 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:02.137100+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1089536 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:03.137209+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1089536 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:04.137357+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1089536 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:05.137468+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1064960 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:06.137585+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1064960 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:07.137683+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1056768 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:08.137796+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1056768 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:09.137894+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1056768 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:10.137996+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1048576 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:11.138096+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1048576 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:12.138203+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1040384 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:13.138306+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1040384 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:14.138429+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1040384 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:15.138525+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 1032192 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:16.138623+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 1032192 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:17.138733+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 1024000 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:18.138808+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 1024000 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:19.138931+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 1024000 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:20.139037+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 1015808 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:21.139151+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 1015808 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:22.139256+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 1007616 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:23.139362+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 1007616 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:24.139482+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 1007616 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:25.139534+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 999424 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:26.139613+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 999424 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:27.139722+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 999424 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:28.139824+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 991232 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:29.139918+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 991232 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:30.143800+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 983040 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:31.143899+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 983040 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:32.144001+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 983040 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:33.144105+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 974848 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:34.144253+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 974848 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:35.144387+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 966656 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:36.144531+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 966656 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:37.144637+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 966656 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:38.144749+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 958464 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:39.144891+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 958464 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:40.145027+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 950272 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:41.145132+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 950272 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:42.145228+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 950272 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:43.146339+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 942080 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:44.147301+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 942080 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:45.147397+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 933888 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:46.147496+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 933888 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:47.147592+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 933888 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:48.147686+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 925696 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:49.147788+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 917504 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:50.147877+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 917504 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:51.148000+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 909312 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:52.148109+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 909312 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:53.148212+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 892928 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:54.148339+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 892928 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:55.148442+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 892928 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:56.148549+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 884736 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:57.148645+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 884736 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:58.148747+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 876544 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:59.148891+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 884736 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:00.148986+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 884736 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:01.149070+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:02.149168+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 876544 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:03.149264+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 876544 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:04.149414+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 868352 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:05.149540+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 868352 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:06.149651+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 860160 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:07.149776+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 860160 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:08.149867+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 851968 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:09.149996+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 851968 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:10.150102+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 851968 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:11.150251+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 843776 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:12.150384+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 843776 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:13.150492+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 843776 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:14.150663+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 835584 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:15.150782+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 835584 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:16.150880+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 835584 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:17.150979+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 827392 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:18.151084+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 827392 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:19.151212+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 819200 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:20.151309+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 819200 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:21.151410+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 811008 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:22.151506+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 811008 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:23.151603+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 811008 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:24.151735+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 802816 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:25.151832+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 802816 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:26.151959+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 802816 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:27.152060+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 794624 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:28.152169+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 794624 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:29.152262+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 786432 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:30.152422+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 786432 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:31.152549+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 786432 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:32.152642+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 778240 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:33.152734+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 778240 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:34.152864+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 770048 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:35.152979+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 753664 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:36.153085+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 745472 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:37.153186+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 745472 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:38.153291+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 745472 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:39.153387+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 737280 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:40.153485+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 737280 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:41.153582+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 729088 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:42.153670+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 729088 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:43.153804+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 729088 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:44.153910+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 720896 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:45.154002+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 720896 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:46.154090+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 720896 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:47.154177+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 712704 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:48.154267+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 712704 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:49.154392+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 704512 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:50.154535+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 737280 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:51.154684+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 729088 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:52.154809+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 729088 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:53.154912+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 729088 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:54.155034+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 720896 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:55.155195+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 720896 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:56.155292+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 712704 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:57.155410+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 712704 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:58.155526+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 712704 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:59.155649+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 704512 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:00.155794+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 704512 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:01.155920+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 696320 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:02.156051+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 696320 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:03.156143+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 696320 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:04.156295+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 688128 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:05.156408+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 688128 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:06.156532+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 671744 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:07.156696+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 671744 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:08.156812+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 663552 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:09.156948+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 663552 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:10.157081+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 663552 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:11.157208+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 655360 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:12.157328+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 655360 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:13.157455+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 647168 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:14.157628+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 647168 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:15.157829+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 638976 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:16.158043+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 638976 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:17.158171+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 638976 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:18.158290+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 630784 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:19.159508+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 630784 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:20.159679+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 622592 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:21.159826+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 622592 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:22.159970+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 614400 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:23.160102+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 614400 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:24.160555+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 614400 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:25.160670+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 614400 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:26.160803+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 606208 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:27.160932+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 606208 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:28.161085+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 598016 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:29.161201+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 598016 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:30.161329+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 589824 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:31.161443+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 589824 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:32.161554+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 581632 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:33.161661+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 581632 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:34.161801+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 581632 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:35.161920+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 573440 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:36.162026+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 573440 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:37.162143+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 565248 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:38.162246+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 565248 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:39.162344+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 565248 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:40.162438+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 557056 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:41.162537+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 557056 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:42.162641+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 548864 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:43.162740+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 548864 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:44.162867+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 548864 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:45.162962+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 540672 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:46.163067+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 540672 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:47.163165+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 532480 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:48.163268+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 532480 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:49.163383+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 532480 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 6699 writes, 27K keys, 6699 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6699 writes, 1243 syncs, 5.39 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6699 writes, 27K keys, 6699 commit groups, 1.0 writes per commit group, ingest: 19.36 MB, 0.03 MB/s
                                           Interval WAL: 6699 writes, 1243 syncs, 5.39 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.042       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.042       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.042       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561fc2fff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:50.163500+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 458752 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:51.163641+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 458752 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:52.163790+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 450560 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:53.163916+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 450560 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:54.164070+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 442368 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:55.164207+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 442368 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:56.164339+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 442368 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:57.164453+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 434176 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:58.164642+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 434176 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:59.164828+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 425984 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:00.164973+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 425984 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:01.165115+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 425984 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:02.165303+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 417792 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:03.165443+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 417792 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:04.165609+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 409600 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:05.165724+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 401408 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:06.165823+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 393216 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:07.165940+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 385024 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:08.166037+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 385024 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:09.166139+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 376832 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:10.166245+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 376832 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:11.166344+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 368640 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:12.166448+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 368640 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:13.166569+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 368640 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:14.166704+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 360448 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:15.166835+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 360448 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:16.166953+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 352256 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:17.167077+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 352256 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:18.167224+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 352256 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:19.167330+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 344064 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:20.167438+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 335872 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:21.167544+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 335872 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:22.167653+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 327680 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:23.167774+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 327680 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:24.167901+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 319488 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:25.168013+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 327680 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:26.168151+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 327680 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:27.168275+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 319488 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:28.168388+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 319488 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:29.168513+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 319488 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:30.168629+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 311296 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:31.168749+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 311296 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:32.168880+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 303104 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:33.168996+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 303104 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:34.169143+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 303104 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:35.169251+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 294912 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:36.169381+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 294912 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:37.169508+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 286720 heap: 75440128 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 296.003448486s of 296.010253906s, submitted: 4
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:38.169616+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 262144 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:39.169718+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 262144 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:40.169823+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 262144 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:41.169926+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 262144 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:42.170083+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 262144 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:43.170191+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 262144 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:44.170325+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 262144 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:45.170448+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 262144 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:46.170550+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 262144 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:47.170660+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 253952 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:48.170776+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 253952 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:49.170911+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 253952 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:50.171017+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 245760 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:51.171161+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 245760 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:52.171303+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 237568 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:53.171442+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 237568 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:54.171602+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 229376 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:55.171707+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 229376 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:56.171799+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 229376 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:57.171905+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 221184 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:58.172016+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 221184 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:59.172123+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 221184 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:00.172229+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 212992 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:01.172347+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 212992 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:02.172459+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 204800 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:03.172560+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 204800 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:04.172698+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 204800 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:05.172808+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 188416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:06.172916+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 180224 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:07.173014+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 172032 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:08.173153+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 172032 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:09.173251+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 172032 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:10.173358+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 163840 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:11.173457+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 163840 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:12.173549+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 155648 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:13.173644+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 155648 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:14.173799+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 155648 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:15.173903+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 147456 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:16.174003+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 147456 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:17.174146+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 139264 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:18.174249+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 139264 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:19.174366+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 131072 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:20.174457+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 131072 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:21.174559+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 131072 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:22.174666+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 122880 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:23.174775+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 122880 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:24.174913+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 122880 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:25.175001+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 114688 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:26.175132+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 114688 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:27.175238+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 106496 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:28.175376+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 106496 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:29.175514+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 106496 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:30.175610+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 98304 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:31.175727+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 98304 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:32.175822+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 90112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:33.175934+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 90112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:34.176063+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 90112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:35.176163+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 81920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:36.176258+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 81920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:37.176360+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 73728 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:38.176466+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 73728 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:39.176622+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 73728 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:40.176725+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 65536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:41.176848+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 65536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:42.177178+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 57344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:43.177287+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 57344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:44.177577+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 49152 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:45.177671+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 49152 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:46.177778+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 49152 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:47.177872+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76447744 unmapped: 40960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:48.178001+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76447744 unmapped: 40960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:49.178094+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 32768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:50.178223+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 32768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:51.178457+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 32768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:52.178585+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 32768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:53.178703+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 32768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:54.178845+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 32768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:55.178953+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 32768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:56.179070+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 32768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:57.179191+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 32768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:58.179312+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 32768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:59.179417+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 32768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:00.179512+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 24576 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:01.179625+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 24576 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:02.179720+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 24576 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:03.179832+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 24576 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:04.179972+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 24576 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:05.180095+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 24576 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:06.180204+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 16384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:07.180336+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 16384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:08.180447+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 16384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:09.180560+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 16384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:10.180679+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 16384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:11.180801+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 16384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:12.180955+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 16384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:13.181092+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 16384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:14.181262+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 16384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:15.181425+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 16384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:16.181526+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 16384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:17.181617+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 16384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:18.181715+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 16384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:19.181831+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:20.181926+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:21.182031+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:22.182126+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:23.182222+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:24.182339+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:25.182431+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:26.182542+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:27.182653+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:28.182776+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:29.182923+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:30.183033+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:31.183134+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:32.183235+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:33.183334+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:34.183473+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:35.183573+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:36.183679+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:37.183776+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 0 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:38.183880+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 0 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:39.183983+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 0 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:40.184557+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 0 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:41.184666+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 0 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:42.184770+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 0 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:43.184869+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 0 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:44.185182+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:45.185278+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:46.185381+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:47.185488+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:48.185601+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:49.185713+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:50.185805+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:51.185926+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:52.186070+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:53.186224+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:54.186383+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:55.186560+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:56.186719+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:57.186858+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:58.187021+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:59.187155+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:00.187314+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:01.187430+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:02.187575+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:03.187743+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:04.187999+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:05.188118+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:06.188225+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:07.188355+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:08.188489+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:09.188606+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:10.188733+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:11.188870+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:12.188970+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1040384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:13.189108+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1032192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:14.189306+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1032192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:15.189449+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1032192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:16.189573+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1032192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:17.189682+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1032192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:18.189924+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1032192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:19.190119+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1032192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:20.190277+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1032192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:21.190422+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1032192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:22.190558+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1032192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:23.190681+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1032192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:24.190895+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1032192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:25.191019+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1032192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:26.191188+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1032192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:27.191328+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1032192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:28.191432+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1032192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:29.191546+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1032192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:30.191662+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1032192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:31.191798+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1032192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:32.191933+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1032192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:33.192049+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1032192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:34.192202+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 1024000 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:35.192329+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 1024000 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:36.192456+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 1024000 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:37.192603+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 1024000 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:38.192700+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 1024000 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:39.192783+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:40.192877+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:41.193021+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:42.193178+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:43.193285+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:44.193427+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:45.193528+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:46.193626+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:47.193721+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:48.193815+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:49.193963+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:50.194074+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:51.194168+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:52.194265+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:53.194494+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:54.194643+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:55.194790+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:56.194914+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:57.195026+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:58.195150+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:59.195274+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:00.195408+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:01.195527+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:02.195658+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:03.195784+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:04.195919+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1015808 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:05.196018+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 999424 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:06.196152+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 999424 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:07.196260+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 999424 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:08.196420+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 999424 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:09.196579+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 999424 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:10.196704+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 999424 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:11.196809+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 999424 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:12.196922+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 999424 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:13.197075+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 999424 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:14.197231+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 999424 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:15.197381+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 999424 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:16.197477+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 991232 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:17.197593+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 991232 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:18.197726+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 991232 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:19.197848+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 991232 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:20.197985+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 991232 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:21.198105+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 991232 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:22.198243+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 991232 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:23.198357+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 991232 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:24.198514+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 983040 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:25.198646+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 983040 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:26.198802+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 983040 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:27.198929+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 983040 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:28.199070+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 983040 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:29.199225+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 983040 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:30.199366+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 983040 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:31.199491+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 983040 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:32.199624+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 983040 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:33.199799+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 983040 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:34.199984+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 983040 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:35.200088+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 983040 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:36.200196+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 983040 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:37.200330+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 983040 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:38.200453+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 974848 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:39.200595+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 974848 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:40.200740+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:41.200853+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:42.200985+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:43.201150+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:44.201318+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:45.201475+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:46.201611+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:47.201748+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:48.201885+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:49.201993+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:50.202093+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:51.202181+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:52.202297+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:53.202398+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:54.202549+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 ms_handle_reset con 0x561fc499a000 session 0x561fc53254a0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: handle_auth_request added challenge on 0x561fc499a800
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 ms_handle_reset con 0x561fc499ac00 session 0x561fc5324b40
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: handle_auth_request added challenge on 0x561fc499a000
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:55.202663+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:56.202849+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:57.203035+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:58.203167+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:59.203301+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:00.203421+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:01.203553+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:02.203680+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:03.203803+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:04.203980+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 966656 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:05.204136+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 958464 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:06.204304+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 958464 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:07.204454+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 958464 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:08.204569+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 958464 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:09.204681+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 950272 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:10.204836+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 950272 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:11.204983+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 950272 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:12.205543+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 950272 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:13.205696+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 950272 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:14.205804+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 950272 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:15.205964+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 950272 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:16.206095+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 950272 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:17.206224+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 950272 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:18.206343+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 950272 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:19.206483+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:20.206632+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:21.206828+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:22.206935+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:23.207077+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:24.207232+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:25.207369+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:26.207488+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:27.207613+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:28.207729+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:29.207856+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:30.207962+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:31.208085+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:32.208217+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:33.208365+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:34.208548+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:35.208869+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:36.209033+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:37.209202+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:38.209367+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:39.209549+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:40.209719+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:41.209861+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:42.210034+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:43.210195+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:44.210382+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:45.210539+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:46.210718+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:47.210883+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 26 12:58:09 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/506480607' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
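[editor's note] The two mon entries above record an administrative "mgr versions" query being dispatched by client.admin from 192.168.122.100. As a minimal sketch only (assumptions: the "ceph" CLI and an admin keyring are available on this node; this script is illustrative and is not what produced the audit entry), the same query can be issued and parsed like this:

    #!/usr/bin/env python3
    # Issue the same "mgr versions" query seen in the mon audit entry and print the result.
    # Assumption: "ceph mgr versions --format json-pretty" is runnable with admin credentials.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mgr", "versions", "--format", "json-pretty"],
        capture_output=True, text=True, check=True,
    ).stdout
    versions = json.loads(out)  # maps a version string to the number of mgr daemons running it
    for ver, count in versions.items():
        print(f"{count} mgr daemon(s) running {ver}")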
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:48.211037+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:49.211173+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:50.211332+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:51.211479+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:52.211655+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:53.211833+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:54.211974+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 942080 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:55.212115+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 933888 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:56.212233+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 933888 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:57.212369+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 933888 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:58.212494+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 933888 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:59.212609+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 933888 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:00.212715+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 933888 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:01.212850+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 933888 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:02.212961+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 925696 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:03.213095+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 925696 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:04.213271+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 925696 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:05.213416+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 925696 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:06.213552+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:07.213731+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:08.213906+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:09.214021+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:10.214143+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:11.214287+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:12.214445+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:13.214576+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:14.214686+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:15.214803+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:16.214929+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:17.215115+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:18.215263+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:19.215395+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:20.215550+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:21.215686+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:22.215846+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:23.216019+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:24.216212+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:25.216321+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:26.216482+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:27.216644+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:28.216785+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:29.216940+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:30.217113+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:31.217244+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:32.217366+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:33.217496+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:34.217667+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:35.217807+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:36.217965+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:37.218115+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:38.218258+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:39.218394+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:40.218543+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:41.218725+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:42.218868+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:43.219007+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:44.219165+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:45.219317+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:46.219428+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:47.219551+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:48.219711+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:49.219831+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:50.219994+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:51.220128+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:52.220288+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:53.220419+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:54.220592+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:55.220715+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:56.220790+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:57.220950+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:58.221111+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:59.221240+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:00.221354+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:01.221476+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:02.221586+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:03.221713+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:04.222006+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:05.222142+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:06.222259+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:07.222388+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:08.222500+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:09.222616+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:10.222789+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:11.222951+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:12.223084+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:13.223248+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:14.223429+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:15.223593+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:16.223783+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:17.223960+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:18.224111+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:19.224268+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:20.224413+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:21.224526+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:22.224649+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:23.224788+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:24.224922+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:25.225053+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:26.225181+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:27.225324+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:28.225469+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:29.225641+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:30.225798+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:31.225928+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:32.226072+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:33.226172+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:34.226326+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:35.226437+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:36.226571+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:37.226728+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:38.226886+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:39.227063+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:40.227206+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:41.227325+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:42.227462+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:43.227596+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:44.227804+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:45.227953+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:46.228113+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:47.228270+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:48.228416+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:49.228538+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:50.228690+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:51.228829+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:52.228974+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:53.229136+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:54.229298+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:55.229421+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:56.229565+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:57.229789+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:58.229947+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:59.230098+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:00.230253+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:01.230400+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:02.230521+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:03.230640+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:04.230798+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:05.230938+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:06.231042+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 884736 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:07.231148+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 884736 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:08.231253+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 884736 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:09.231400+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 884736 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:10.231527+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 884736 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:11.231644+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 884736 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:12.231800+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 884736 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:13.231919+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 884736 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:14.232065+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 884736 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:15.232186+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 876544 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:16.232318+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 876544 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:17.232481+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 876544 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:18.232618+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 876544 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:19.232789+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 876544 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:20.232945+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 876544 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:21.233102+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 876544 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:22.233250+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 876544 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:23.233378+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 876544 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:24.233516+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 876544 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:25.233649+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 876544 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:26.233815+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 876544 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:27.233944+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 876544 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:28.234098+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 876544 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:29.234224+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 876544 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:30.234386+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 876544 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:31.234509+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 876544 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:32.234655+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 876544 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:33.234807+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 868352 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:34.234995+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:35.235130+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:36.235265+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:37.235427+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:38.235553+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:39.235685+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:40.235870+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:41.236017+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:42.236145+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:43.236276+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:44.236476+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:45.236633+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:46.236752+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:47.236933+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:48.237095+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:49.237218+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:50.237335+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:51.237497+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:52.237667+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:53.237807+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:54.238005+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:55.238178+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:56.238392+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:57.238570+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:58.238734+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:59.239023+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:00.239183+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:01.239323+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:02.239484+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:03.239622+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:04.239824+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:05.239962+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:06.240081+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:07.240208+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:08.240342+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:09.240458+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:10.240593+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:11.240736+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:12.240961+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:13.241082+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:14.241239+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:15.241354+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:16.241464+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:17.241618+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:18.241800+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:19.241924+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:20.242049+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:21.242160+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:22.242303+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:23.242414+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:24.242574+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:25.242740+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:26.242920+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:27.243054+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:28.243174+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:29.243311+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:30.243436+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:31.243543+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:32.243653+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:33.243782+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:34.243909+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:35.244014+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 843776 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:36.244125+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 671744 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:37.244233+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: do_command 'config diff' '{prefix=config diff}'
Nov 26 12:58:09 compute-0 ceph-osd[89328]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 26 12:58:09 compute-0 ceph-osd[89328]: do_command 'config show' '{prefix=config show}'
Nov 26 12:58:09 compute-0 ceph-osd[89328]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 26 12:58:09 compute-0 ceph-osd[89328]: do_command 'counter dump' '{prefix=counter dump}'
Nov 26 12:58:09 compute-0 ceph-osd[89328]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 26 12:58:09 compute-0 ceph-osd[89328]: do_command 'counter schema' '{prefix=counter schema}'
Nov 26 12:58:09 compute-0 ceph-osd[89328]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:09 compute-0 ceph-osd[89328]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 1540096 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: bluestore.MempoolThread(0x561fc30ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835646 data_alloc: 218103808 data_used: 225280
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:38.244431+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fca55000/0x0/0x4ffc00000, data 0x11e08d/0x1c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 12:58:09 compute-0 ceph-osd[89328]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1122304 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: tick
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_tickets
Nov 26 12:58:09 compute-0 ceph-osd[89328]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:39.244551+0000)
Nov 26 12:58:09 compute-0 ceph-osd[89328]: do_command 'log dump' '{prefix=log dump}'
Nov 26 12:58:09 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 26 12:58:09 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1392560289' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 26 12:58:09 compute-0 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 12:58:09 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14511 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 12:58:10 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 26 12:58:10 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/641448869' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 26 12:58:10 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14515 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 12:58:10 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:58:10 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14517 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:10 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/506480607' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 26 12:58:10 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1392560289' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 26 12:58:10 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/641448869' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 26 12:58:10 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14519 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 12:58:10 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14521 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:10 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14523 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 12:58:11 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:58:11 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14527 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 12:58:11 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 26 12:58:11 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2054606920' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 26 12:58:11 compute-0 ceph-mon[74966]: from='client.14511 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 12:58:11 compute-0 ceph-mon[74966]: from='client.14515 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 12:58:11 compute-0 ceph-mon[74966]: pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:58:11 compute-0 ceph-mon[74966]: from='client.14517 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:11 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2054606920' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 26 12:58:11 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14531 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 12:58:11 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 26 12:58:11 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2718054383' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 26 12:58:11 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14535 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 12:58:12 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 26 12:58:12 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1024375157' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 26 12:58:12 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:58:12 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 26 12:58:12 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1760382492' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 26 12:58:12 compute-0 ceph-mon[74966]: from='client.14519 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 12:58:12 compute-0 ceph-mon[74966]: from='client.14521 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:12 compute-0 ceph-mon[74966]: from='client.14523 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 12:58:12 compute-0 ceph-mon[74966]: from='client.14527 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 12:58:12 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2718054383' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 26 12:58:12 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1024375157' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 26 12:58:12 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1760382492' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 26 12:58:12 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 26 12:58:12 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 26 12:58:12 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 26 12:58:12 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4081113466' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000056 1 0.000026
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.15(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000024 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000074 1 0.000024
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000020 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000104 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.14(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.14( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000020 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.14( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.14( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000010
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.14( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.14( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.14( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.14( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.14( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.14( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.14( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.14( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000047 1 0.000021
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.14( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.18(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000031 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000064 1 0.000051
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.11(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000024 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000014
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000091 1 0.000106
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000135 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.1f(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000050 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000039 1 0.000260
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000066 1 0.000023
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.10(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000026 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000016
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000118 1 0.000088
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.1(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.1( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000019 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.1( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.1( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000028 1 0.000008
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.1( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.1( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.1( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.1( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.1( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.1( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.1( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.1( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000052 1 0.000069
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.1( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.3(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000025 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000014
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000035 1 0.000025
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000016 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000064 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.f(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000041 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000025 1 0.000034
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000040 1 0.000024
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.f( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.3(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000025 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000046 1 0.000055
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000045 1 0.000032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.c(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000080 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000014
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000053 1 0.000024
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.d(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000022 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000178 1 0.000024
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000019 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000211 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.e(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000097 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000119 1 0.000128
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000366 1 0.000040
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.e( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.e(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000022 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000356 1 0.000022
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.17(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.17( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000025 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.17( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.17( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.17( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.17( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.17( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.17( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.17( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.17( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.17( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.17( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000165 1 0.000021
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.17( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.f(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000020 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000246 1 0.000020
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000018 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000277 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.9(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000026 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000079 1 0.000083
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000077 1 0.000027
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000016 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000102 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.f(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000028 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000019
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000052 1 0.000029
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.b(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000022 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000040 1 0.000028
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.9(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000020 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000066 1 0.000030
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.6(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000018 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000020
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000049 1 0.000027
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.4(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000019 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000008
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000123 1 0.000021
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.4(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.4( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000021 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.4( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.4( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000062 1 0.000009
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.4( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.4( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.4( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.4( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.4( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.4( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.4( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.4( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000049 1 0.000082
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.4( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.f(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000028 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000027
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000058 1 0.000025
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.9(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000023 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000031 1 0.000035
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000057 1 0.000029
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.7(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000077 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000013
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000160 1 0.000124
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000016 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000189 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.6(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.6( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000014 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.6( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.6( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.6( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.6( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.6( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.6( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.6( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.6( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.6( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.5(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.6( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000200 1 0.000021
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.6( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000084 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000012
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000104 1 0.000026
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000024 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000149 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.19(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000025 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000013
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000047 1 0.000029
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000015 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000071 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.18(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000018 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000179 1 0.000020
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1f(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000015 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000031 1 0.000019
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1f(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000139 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000013
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000080 1 0.000020
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000015 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000103 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1d(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000021 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000102 1 0.000026
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1d(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000055 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000016
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000072 1 0.000030
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000016 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000104 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.13(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000018 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=0 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000012
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000037 1 0.000025
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.13(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000015 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000032 1 0.000018
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000054 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.6(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000020 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000048 1 0.000020
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000014 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000037 1 0.000020
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000017 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000063 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1b(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000216 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000013
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000043 1 0.000020
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000064 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1a(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000012 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000035 1 0.000019
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.17(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000017 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000051 1 0.000024
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000015 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000075 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.19(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.19( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000021 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.19( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.19( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.19( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.19( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.19( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.19( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.19( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.19( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.19( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.19( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000040 1 0.000020
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.19( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.10(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.10( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000012 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.10( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.10( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.10( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.10( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.10( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.10( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.10( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.10( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.10( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.10( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000032 1 0.000018
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.10( empty local-lis/les=0/0 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.021893 2 0.000030
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.14( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.021767 2 0.000045
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.14( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.14( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.14( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.b(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000013 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=0 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000042 1 0.000020
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000064 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.d(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000011 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000007
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000234 1 0.000202
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.7(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000019 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000050 1 0.000019
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 50 handle_osd_map epochs [50,50], i have 50, src has [1,50]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.4(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000024 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000016
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000052 1 0.000023
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.8(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000021 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000042 1 0.000020
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.9(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000019 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000046 1 0.000019
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.e(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000026 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000107 1 0.000028
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000022 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000013
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000039 1 0.000024
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.15(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000012 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000033 1 0.000026
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.16(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000018 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000013
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000026 1 0.000023
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1e(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000026 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000045 1 0.000030
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.17(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000021 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=0 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000047 1 0.000021
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.14( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.022086 2 0.000023
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.14( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.14( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.14( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020213 2 0.000030
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.021859 2 0.000025
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.1( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020171 2 0.000177
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.1( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.1( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.1( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020787 2 0.000026
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019750 2 0.000026
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017085 2 0.000026
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013767 2 0.000039
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012710 2 0.000029
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.17( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.012178 2 0.000029
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.17( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.17( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.17( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.010651 2 0.000029
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010516 2 0.000021
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.9( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010475 2 0.000020
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.9( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.9( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.9( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010344 2 0.000025
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010116 2 0.000030
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019076 2 0.000022
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.4( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010031 2 0.000134
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.4( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009719 2 0.000023
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.4( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.4( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009328 2 0.000022
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.6( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008525 2 0.000190
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.6( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.6( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.6( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.007821 2 0.000026
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007819 2 0.000018
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007365 2 0.000051
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007158 2 0.000019
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006485 2 0.000019
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.1a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.19( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006287 2 0.000046
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.19( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.19( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.19( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.10( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006218 2 0.000017
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.10( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.10( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[11.10( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.6( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007391 2 0.000021
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.6( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.6( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[8.6( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.d( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.006394 2 0.000029
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.d( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.d( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.d( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006110 2 0.000021
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005860 2 0.000031
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005375 2 0.000021
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.e( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.004800 2 0.000030
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.e( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.e( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.e( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004743 2 0.000020
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005389 2 0.000022
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.15( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.005868 2 0.000018
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.15( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.15( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.15( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.9( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.006632 2 0.000019
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.9( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.9( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.9( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006036 2 0.000017
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000037 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005773 2 0.000023
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 50 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 50 heartbeat osd_stat(store_statfs(0x4fe142000/0x0/0x4ffc00000, data 0x3c3d0/0x8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:20.764594+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 50 handle_osd_map epochs [50,51], i have 50, src has [1,51]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 50 handle_osd_map epochs [50,51], i have 51, src has [1,51]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.889147 2 0.000024
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.896676 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.887245 2 0.000027
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.893188 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.1( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.886953 2 0.000023
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.6( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.888490 2 0.000021
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.4( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.1( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.891786 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.6( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.895956 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.1( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.6( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.898279 2 0.000036
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.898478 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.898491 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.885783 2 0.000037
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.892487 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 lc 44'56 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.889383 2 0.000022
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.897428 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000244 1 0.000256
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.889995 2 0.000022
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.900265 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.890267 2 0.000023
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.900846 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.17( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.890685 2 0.000054
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.916361 2 0.000038
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.17( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.903062 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.17( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.17( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.14( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894865 2 0.000017
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.14( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.916710 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.14( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.885644 2 0.000343
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.892046 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.16( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894949 2 0.000025
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.916922 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.889601 2 0.000013
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.889354 2 0.000029
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.895896 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.899421 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.19( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.889143 2 0.000029
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.19( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.895529 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.19( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.895987 2 0.000027
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.896060 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.896070 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000032 1 0.000042
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.890055 2 0.000026
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.898086 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.898178 2 0.000033
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.898262 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.898275 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000021 1 0.000031
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.897519 2 0.000030
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.897630 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.897641 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000029 1 0.000038
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.886675 2 0.000028
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.892132 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.1e( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.889891 2 0.000017
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.897103 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.897208 2 0.000048
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.897321 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.897338 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000021 1 0.000030
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.911484 2 0.000036
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.911556 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.911568 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.3] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.3] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000021 1 0.000029
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.1( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.891660 2 0.000025
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000010 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.1( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.911912 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.1( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.896673 2 0.000031
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.896762 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.896774 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000023 1 0.000050
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.890474 2 0.000026
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.900274 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.917109 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.e( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.888167 2 0.000027
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.907468 2 0.000040
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.917129 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.e( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.893104 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.907690 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.907704 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.e( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.e( v 49'65 lc 44'48 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000021 1 0.000034
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.891687 2 0.000020
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.908846 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000055 1 0.000708
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.890731 2 0.000019
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.909872 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.891825 2 0.000020
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.911635 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.f( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.902879 2 0.000040
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.903165 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.903176 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.891656 2 0.000035
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.904756 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000022 1 0.000032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.891758 2 0.000041
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.905928 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.e( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.d( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.888816 2 0.000039
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.d( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.895472 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.d( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.891287 2 0.000034
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.d( v 49'65 lc 44'50 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 0.902014 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.886701 2 0.000027
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.892544 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.17( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.902215 2 0.000045
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.902326 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.902337 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000026 1 0.000036
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.888796 2 0.000021
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.894979 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.7( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.6( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.890859 2 0.000017
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.6( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.899611 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.6( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.896625 2 0.000030
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.896719 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.896731 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.9( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.891265 2 0.000025
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.9( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.901827 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.9( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000031 1 0.000051
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000458 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.15( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.888808 2 0.000027
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.15( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.894739 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.15( v 49'65 lc 0'0 (0'0,49'65] local-lis/les=47/49 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.15( v 49'65 lc 44'46 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894077 2 0.000071
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.916057 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.14( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894250 2 0.000026
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.14( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.916413 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.14( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.890427 2 0.000026
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.895872 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.8( v 44'64 lc 0'0 (0'0,44'64] local-lis/les=47/49 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.897517 2 0.000028
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.897590 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.897601 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000038 1 0.000048
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.4( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.892888 2 0.000041
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.4( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.903004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.4( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.901223 2 0.000052
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.901381 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.901395 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.5] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.5] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000023 1 0.000032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894277 2 0.000019
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894496 2 0.000059
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.915151 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.914860 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=45/46 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.915677 2 0.000050
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.915822 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.915914 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000021 1 0.000029
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.899775 2 0.000027
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.899838 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.899849 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000029 1 0.000039
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.10( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.892217 2 0.000022
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.10( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.898487 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.10( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=47/48 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.932418 7 0.000057
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.932442 7 0.000059
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.3] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.932235 7 0.000054
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.932159 7 0.000056
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.932138 7 0.000184
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 51 handle_osd_map epochs [51,51], i have 51, src has [1,51]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.932907 7 0.000060
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894559 2 0.000022
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.904975 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004883 4 0.000054
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005049 3 0.000084
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.5] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 lc 44'56 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006681 3 0.000089
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 lc 44'56 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.006444 3 0.000088
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.4( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 lc 44'56 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006810 3 0.000209
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.1( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.17( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 lc 44'56 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000242 2 0.000034
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 lc 44'56 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 lc 44'56 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 lc 44'56 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.e( v 49'65 lc 44'48 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 51 handle_osd_map epochs [51,51], i have 51, src has [1,51]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.d( v 49'65 lc 44'50 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006819 4 0.000052
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006867 4 0.000036
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.17( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.006837 5 0.000047
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.17( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006792 4 0.000031
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006777 4 0.000027
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006738 4 0.000026
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006835 3 0.000055
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.16( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006760 4 0.000457
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.006663 5 0.000049
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006596 3 0.000030
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006577 4 0.000024
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.1e( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006421 4 0.000123
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.1( v 44'2 (0'0,44'2] local-lis/les=50/51 n=1 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006394 4 0.000036
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.e( v 49'65 lc 44'48 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.006332 4 0.000043
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.e( v 49'65 lc 44'48 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006315 4 0.000057
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006827 4 0.000154
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.19( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006294 4 0.000030
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006238 4 0.000025
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.d( v 49'65 lc 44'50 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.006147 3 0.000036
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.d( v 49'65 lc 44'50 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.006130 4 0.000042
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006114 3 0.000032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006015 4 0.000027
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006039 3 0.000035
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.17( v 44'64 (0'0,44'64] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.6( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.7( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005988 4 0.000025
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006256 4 0.000291
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.e( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007294 4 0.000222
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006466 4 0.000045
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.f( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.15( v 49'65 lc 44'46 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=47/47 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=45/45 les/c/f=46/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.15( v 49'65 lc 44'46 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.005674 3 0.000075
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.15( v 49'65 lc 44'46 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005552 4 0.000034
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005514 3 0.000029
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.14( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.8( v 44'64 (0'0,44'64] local-lis/les=50/51 n=1 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'64 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005382 4 0.000027
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.4( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005237 4 0.000032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005234 4 0.000025
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006271 4 0.000772
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=47/47 les/c/f=48/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005846 4 0.000036
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.10( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004699 4 0.000049
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=50/51 n=0 ec=43/22 lis/c=50/43 les/c/f=51/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 49'65 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.011945 2 0.000028
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 49'65 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.17( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.011653 1 0.000014
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.17( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 49'65 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.9( v 49'65 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 49'65 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.17( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.17( v 44'2 lc 0'0 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.948012 7 0.000055
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.947828 7 0.000037
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 60473344 unmapped: 204800 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 44'2 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.069810 1 0.000032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 44'2 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.081388 1 0.000028
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 44'2 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[11.17( v 44'2 (0'0,44'2] local-lis/les=50/51 n=0 ec=47/38 lis/c=50/47 les/c/f=51/48/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=44'2 mlcod 44'2 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 33'4 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.115997 1 0.000041
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 33'4 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 33'4 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 33'4 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.e( v 49'65 lc 44'48 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.197482 2 0.000058
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.e( v 49'65 lc 44'48 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.e( v 49'65 lc 44'48 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.e( v 49'65 lc 44'48 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.202559 2 0.000015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.202577 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 49'65 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.129592 1 0.000040
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 49'65 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 49'65 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.d( v 49'65 lc 44'50 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.327060 3 0.000014
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.e( v 49'65 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 49'65 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.d( v 49'65 lc 44'50 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.d( v 49'65 lc 44'50 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.d( v 49'65 lc 44'50 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.336001 2 0.000016
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.336017 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 49'65 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.069826 1 0.000052
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 49'65 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 49'65 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.d( v 49'65 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 49'65 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.396928 2 0.000012
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000006 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 33'4 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.140717 1 0.000311
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 33'4 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 33'4 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=50/51 n=0 ec=45/32 lis/c=50/45 les/c/f=51/46/0 sis=50) [0] r=0 lpr=50 pi=[45,50)/1 crt=33'4 mlcod 33'4 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.15( v 49'65 lc 44'46 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.536433 3 0.000016
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.15( v 49'65 lc 44'46 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.15( v 49'65 lc 44'46 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.15( v 49'65 lc 44'46 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.541923 2 0.000011
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.541936 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 49'65 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.069757 1 0.000040
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 49'65 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 49'65 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[10.15( v 49'65 (0'0,49'65] local-lis/les=50/51 n=0 ec=47/36 lis/c=50/47 les/c/f=51/49/0 sis=50) [0] r=0 lpr=50 pi=[47,50)/1 crt=49'65 mlcod 49'65 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.595332 1 0.000038
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.595384 1 0.000011
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.408902 1 0.000036
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.275524 1 0.000032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.069654 1 0.000027
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.067058 1 0.000047
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.662477 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.610532 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.679960 2 0.000013
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.679974 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000325 1 0.000032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.074062 1 0.000069
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.669474 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.617322 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.774604 2 0.000018
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.774619 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000040 1 0.000034
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.177683 2 0.000069
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.586609 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 1.721635 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.192394 2 0.000098
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.467941 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 1.736429 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:21.764685+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.f( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.214545 2 0.000079
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.f( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.284232 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.f( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 1.758432 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.893259 2 0.000017
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.893277 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000040 1 0.000034
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.219835 2 0.000119
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.220217 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 1.832375 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.146882 2 0.000082
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.146956 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 1.854509 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 DELETING pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.035968 2 0.000083
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.036049 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 1.861635 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 60850176 unmapped: 876544 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 464840 data_alloc: 218103808 data_used: 24576
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:22.764810+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _renew_subs
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 51 handle_osd_map epochs [52,52], i have 51, src has [1,52]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.11( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.934506 5 0.000023
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.13( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.934463 5 0.000046
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.11( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.13( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.11( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.13( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.5( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.934654 5 0.000029
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.5( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.5( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.b( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.934929 5 0.000023
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.b( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.17( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=3 mbc={}] exit Started/Stray 1.936724 5 0.000023
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.b( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.17( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=3 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.17( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=3 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.9( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.936958 5 0.000026
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.9( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.9( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.937166 5 0.000304
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.1( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.937405 5 0.000024
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.d( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=8 mbc={}] exit Started/Stray 1.937323 5 0.000021
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.1( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.d( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.1( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.d( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.3( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=8 mbc={}] exit Started/Stray 1.937601 5 0.000027
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.3( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.3( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.1d( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.937687 5 0.000021
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.1d( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.1d( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.19( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.937820 5 0.000024
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.19( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.19( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.1b( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=2 mbc={}] exit Started/Stray 1.937946 5 0.000023
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.1b( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=2 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.1b( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=2 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.1f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.937844 5 0.000023
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.15( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.937018 5 0.000506
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.1f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.15( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.1f( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.15( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.7( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.938349 5 0.000027
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.7( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.7( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.909474 19 0.000031
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.911643 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.911681 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.911714 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.908397 19 0.000037
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.910986 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.911653 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.090338707s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 110.222427368s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.911667 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.090310097s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.222427368s@ mbc={}] exit Reset 0.000045 1 0.000065
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.090310097s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.222427368s@ mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.090310097s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.222427368s@ mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.091548920s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 110.223670959s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.090310097s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.222427368s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.090310097s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.222427368s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.090310097s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.222427368s@ mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.091526985s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.223670959s@ mbc={}] exit Reset 0.000040 1 0.000055
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.091526985s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.223670959s@ mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.091526985s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.223670959s@ mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.091526985s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.223670959s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.091526985s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.223670959s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.091526985s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.223670959s@ mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.909516 19 0.000032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.911429 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.911465 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.911477 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.090384483s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 110.222587585s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.090363503s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.222587585s@ mbc={}] exit Reset 0.000034 1 0.000049
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.090363503s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.222587585s@ mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.090363503s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.222587585s@ mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.090363503s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.222587585s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.090363503s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.222587585s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.090363503s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.222587585s@ mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.908558 19 0.000034
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.911075 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.911107 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.911119 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.091360092s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 110.223678589s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.091345787s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.223678589s@ mbc={}] exit Reset 0.000024 1 0.000038
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.091345787s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.223678589s@ mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.091345787s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.223678589s@ mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.091345787s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.223678589s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.091345787s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.223678589s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=14.091345787s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.223678589s@ mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 52 handle_osd_map epochs [52,52], i have 52, src has [1,52]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: not registered w/ OSD
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: not registered w/ OSD
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: not registered w/ OSD
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: not registered w/ OSD
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.9] failed. State was: not registered w/ OSD
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1] failed. State was: not registered w/ OSD
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1b] failed. State was: not registered w/ OSD
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: not registered w/ OSD
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: not registered w/ OSD
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.3] failed. State was: not registered w/ OSD
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: not registered w/ OSD
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.b] failed. State was: not registered w/ OSD
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.13( v 44'389 lc 40'116 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.003570 4 0.000079
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.13( v 44'389 lc 40'116 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: not registered w/ OSD
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.13( v 44'389 lc 40'116 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000058 1 0.000022
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.13( v 44'389 lc 40'116 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: not registered w/ OSD
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: not registered w/ OSD
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.5] failed. State was: not registered w/ OSD
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.041010 1 0.000022
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.11( v 44'389 lc 40'182 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.044845 4 0.000081
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.11( v 44'389 lc 40'182 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.11( v 44'389 lc 40'182 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000029 1 0.000037
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.11( v 44'389 lc 40'182 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.038532 1 0.000046
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.17( v 44'389 lc 39'38 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=3 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.083403 4 0.000028
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.17( v 44'389 lc 39'38 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=3 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.17( v 44'389 lc 39'38 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=3 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000029 1 0.000030
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.17( v 44'389 lc 39'38 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=3 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 62136320 unmapped: 638976 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.024533 1 0.000033
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.f( v 44'389 lc 40'44 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.107873 4 0.000050
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.f( v 44'389 lc 40'44 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.f( v 44'389 lc 40'44 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000029 1 0.000027
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 52 pg[9.f( v 44'389 lc 40'44 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 52 handle_osd_map epochs [53,53], i have 52, src has [1,53]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.107381 1 0.000049
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.152089 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.086572 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000034 1 0.000055
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000021 1 0.000082
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.068882 1 0.000017
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.152376 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.086902 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000027 1 0.000066
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000014 1 0.000021
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.044341 1 0.000025
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.152352 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.089093 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000021 1 0.000030
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000019 1 0.000027
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.151730 6 0.000031
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.151867 6 0.000046
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=11
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001395 3 0.000026
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=11
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001777 3 0.000020
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=6
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=6
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001874 3 0.000017
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 lc 40'384 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped m=1 mbc={}] exit Started/ReplicaActive/RepRecovering 0.046654 4 0.000018
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 lc 40'384 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped m=1 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.154191 7 0.000041
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000049 1 0.000085
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.155702 7 0.000032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000042 1 0.000055
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.2( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 DELETING pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.002453 1 0.000030
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.2( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.002544 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.2( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.156793 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.a( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 DELETING pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.007602 1 0.000029
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.a( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.007694 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.a( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.163441 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.079817 3 0.000014
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.079832 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000033 1 0.000040
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.9( v 44'389 lc 40'96 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.290172 7 0.000185
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.9( v 44'389 lc 40'96 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.9( v 44'389 lc 40'96 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000056 1 0.000034
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.9( v 44'389 lc 40'96 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.139129 3 0.000010
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.139142 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000032 1 0.000028
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.6( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 DELETING pi=[43,52)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.065306 2 0.000093
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.6( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.065382 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.6( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 0.296982 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.e( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 DELETING pi=[43,52)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.024295 2 0.000076
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.e( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.024352 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[6.e( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=-1 lpr=52 pi=[43,52)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 0.315384 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.054947 1 0.000034
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.1( v 44'389 lc 40'48 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.345234 7 0.000176
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.1( v 44'389 lc 40'48 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.1( v 44'389 lc 40'48 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000042 1 0.000029
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.1( v 44'389 lc 40'48 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.052761 1 0.000027
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.d( v 44'389 lc 40'110 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.398114 7 0.000203
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.d( v 44'389 lc 40'110 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.d( v 44'389 lc 40'110 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000044 1 0.000020
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.d( v 44'389 lc 40'110 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.059871 1 0.000020
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.3( v 44'389 lc 40'58 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.458107 7 0.000131
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.3( v 44'389 lc 40'58 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.3( v 44'389 lc 40'58 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000042 1 0.000048
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.3( v 44'389 lc 40'58 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.059755 1 0.000048
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.19( v 44'389 lc 40'64 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.518111 7 0.000167
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.19( v 44'389 lc 40'64 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.19( v 44'389 lc 40'64 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000056 1 0.000057
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.19( v 44'389 lc 40'64 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: handle_auth_request added challenge on 0x560331b71000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.053640 1 0.000021
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.1b( v 44'389 lc 40'190 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.571850 7 0.000026
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.1b( v 44'389 lc 40'190 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.1b( v 44'389 lc 40'190 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000032 1 0.000022
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.1b( v 44'389 lc 40'190 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.017483 1 0.000057
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.1f( v 44'389 lc 40'183 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.589450 7 0.000041
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.1f( v 44'389 lc 40'183 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.1f( v 44'389 lc 40'183 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000098 1 0.000102
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.1f( v 44'389 lc 40'183 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.038585 1 0.000022
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.15( v 44'389 lc 40'141 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.628436 7 0.000029
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.15( v 44'389 lc 40'141 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.15( v 44'389 lc 40'141 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000087 1 0.000031
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.15( v 44'389 lc 40'141 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.031404 1 0.000025
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.b( v 44'389 lc 40'76 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.660397 7 0.000041
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.b( v 44'389 lc 40'76 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.b( v 44'389 lc 40'76 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000068 1 0.000089
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.b( v 44'389 lc 40'76 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.031566 1 0.000056
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.7( v 44'389 lc 40'47 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.691750 7 0.000044
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.7( v 44'389 lc 40'47 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.7( v 44'389 lc 40'47 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000061 1 0.000068
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.7( v 44'389 lc 40'47 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.038624 1 0.000022
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.5( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.731014 7 0.000096
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.5( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.5( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000036 1 0.000026
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.5( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.052747 1 0.000027
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.1d( v 44'389 lc 40'42 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.783556 7 0.000039
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.1d( v 44'389 lc 40'42 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.1d( v 44'389 lc 40'42 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000074 1 0.000026
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.1d( v 44'389 lc 40'42 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.038765 1 0.000062
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 lc 40'384 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped m=1 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.668000 1 0.000035
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 lc 40'384 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped m=1 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 lc 40'384 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped m=1 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000108 1 0.000070
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 lc 40'384 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped m=1 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.010234 1 0.000053
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 53 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:23.764926+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:39:52.876128+0000 osd.0 (osd.0) 50 : cluster [DBG] 2.13 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:39:52.893859+0000 osd.0 (osd.0) 51 : cluster [DBG] 2.13 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 63782912 unmapped: 1089536 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 53 handle_osd_map epochs [53,54], i have 53, src has [1,54]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 53 handle_osd_map epochs [54,54], i have 54, src has [1,54]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 51) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:39:52.876128+0000 osd.0 (osd.0) 50 : cluster [DBG] 2.13 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:39:52.893859+0000 osd.0 (osd.0) 51 : cluster [DBG] 2.13 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004670 2 0.000103
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006529 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005293 2 0.000038
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006781 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.375191 1 0.000018
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.159035 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.466799 1 0.000018
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 3.093709 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.158932 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 3.093882 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000039 1 0.000052
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000054 1 0.000066
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000016 1 0.000025
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000015 1 0.000024
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.7(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=0 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000024 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=0 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000056 1 0.000023
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004965 2 0.000093
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006914 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.b(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=0 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000018 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=0 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000085 1 0.000046
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.814222 1 0.000018
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.159453 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 3.096430 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000026 1 0.000038
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000013 1 0.000021
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.3(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=0 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000015 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=0 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000031 1 0.000020
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.326621 1 0.000028
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.159637 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 3.096820 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000020 1 0.000029
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000011 1 0.000020
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.761673 1 0.000017
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.159760 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.701676 1 0.000046
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 3.097186 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.159761 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 3.097103 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000022 1 0.000034
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000027 1 0.000042
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000012 1 0.000024
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000016 1 0.000023
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.f(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=0 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000012 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=0 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000008
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000037 1 0.000018
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.641931 1 0.000017
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.159901 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 3.097529 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.337414 1 0.000025
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000030 1 0.000052
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000015 1 0.000024
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.531668 1 0.000024
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.159880 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 3.097742 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000020 1 0.000031
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000011 1 0.000021
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.588172 1 0.000017
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.160037 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 3.097874 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000028
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000010 1 0.000022
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.570648 1 0.000017
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.160074 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 3.098037 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000028
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000010 1 0.000021
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.500160 1 0.000016
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.160143 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 3.097657 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000025 1 0.000043
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000011 1 0.000022
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.429692 1 0.000025
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.160200 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 3.098566 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000034 1 0.000051
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000011 1 0.000021
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=0/0 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.160879 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 3.098593 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[45,51)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=13
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=13
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002293 3 0.000024
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=9
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002305 3 0.000020
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000276 1 0.001302
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000060 1 0.000033
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.002095 2 0.000050
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=12
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=12
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002008 3 0.000016
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.001922 2 0.000016
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=14
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001888 3 0.000014
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=14
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001756 3 0.000016
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=17
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=17
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001786 3 0.000018
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.001785 2 0.000019
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=19
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=19
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001757 3 0.000017
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=11
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001694 3 0.000015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0 olog.dups.size()=5
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=5
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0 olog.dups.size()=15
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001542 3 0.000014
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001634 3 0.000014
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0 olog.dups.size()=9
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001492 3 0.000020
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0 olog.dups.size()=13
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=13
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001396 3 0.000021
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.002861 2 0.000022
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0 olog.dups.size()=12
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=12
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000688 3 0.000032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002982 4 0.000053
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003496 4 0.000098
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.11( v 44'389 (0'0,44'389] local-lis/les=53/54 n=6 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003641 4 0.000270
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000020 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 54 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/45 les/c/f=54/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:24.765075+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 892928 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 54 handle_osd_map epochs [55,55], i have 54, src has [1,55]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005643 2 0.000024
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007100 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005725 2 0.000016
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007263 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005818 2 0.000127
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007506 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005946 2 0.000043
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007625 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006009 2 0.000035
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007754 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005834 2 0.000026
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006614 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006218 2 0.000051
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 1.008069 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006400 2 0.000025
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.008197 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006402 2 0.000016
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.008228 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006520 2 0.000043
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006289 2 0.000031
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.008444 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.008086 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006606 2 0.000020
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.008585 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006727 2 0.000025
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.008777 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006324 2 0.000591
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006863 2 0.000050
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.009272 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.009075 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.7( v 37'39 lc 33'18 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.007237 2 0.000032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009585 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.007317 2 0.000039
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009658 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=51/52 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002094 3 0.000119
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002259 3 0.000447
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 55 handle_osd_map epochs [55,55], i have 55, src has [1,55]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.7( v 37'39 lc 33'18 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=51/45 les/c/f=52/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004483 3 0.000069
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1b( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004406 4 0.000038
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1d( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.004351 3 0.000081
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004377 4 0.000069
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004286 4 0.000066
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=2 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.004237 4 0.000065
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=2 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.3( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004175 4 0.000040
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.9( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.004070 4 0.000067
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003990 4 0.000044
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.b( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004366 4 0.000168
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003976 4 0.000038
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.5( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000152 1 0.000024
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.f( v 37'39 lc 33'1 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004495 4 0.000458
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.d( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.7( v 37'39 lc 33'18 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.004080 4 0.000051
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.7( v 37'39 lc 33'18 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004701 4 0.000469
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004572 4 0.000048
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/45 les/c/f=55/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:25.765176+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=2 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.770976 2 0.000011
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=2 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=2 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=2 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.770927 3 0.000028
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 55 heartbeat osd_stat(store_statfs(0x4fe11b000/0x0/0x4ffc00000, data 0x4809a/0xb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=54/55 n=2 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.125709 1 0.000062
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=54/55 n=2 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.896723 2 0.000015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=54/55 n=2 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000020 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=54/55 n=2 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 647168 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.081341 1 0.000054
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.7( v 37'39 lc 33'18 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.977988 2 0.000161
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.7( v 37'39 lc 33'18 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.7( v 37'39 lc 33'18 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000009 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.7( v 37'39 lc 33'18 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000019 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.066624 1 0.000070
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000020 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 55 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/50 les/c/f=55/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:26.765294+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 548864 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 622899 data_alloc: 218103808 data_used: 32768
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 55 handle_osd_map epochs [56,56], i have 55, src has [1,56]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 14.235205 31 0.000068
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 14.238010 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 14.238049 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 14.238066 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=9.764075279s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 110.222412109s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=9.763993263s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.222412109s@ mbc={}] exit Reset 0.000103 1 0.000140
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=9.763993263s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.222412109s@ mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=9.763993263s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.222412109s@ mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=9.763993263s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.222412109s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=9.763993263s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.222412109s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=9.763993263s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.222412109s@ mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 14.235204 31 0.000057
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 14.237840 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 14.237874 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 14.237887 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=9.764774323s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 110.223670959s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=9.764751434s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.223670959s@ mbc={}] exit Reset 0.000039 1 0.000061
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=9.764751434s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.223670959s@ mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=9.764751434s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.223670959s@ mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=9.764751434s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.223670959s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=9.764751434s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.223670959s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 56 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=9.764751434s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 110.223670959s@ mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 56 handle_osd_map epochs [56,56], i have 56, src has [1,56]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:27.765439+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:39:56.839537+0000 osd.0 (osd.0) 52 : cluster [DBG] 5.14 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:39:56.853363+0000 osd.0 (osd.0) 53 : cluster [DBG] 5.14 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 64397312 unmapped: 475136 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 53) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:39:56.839537+0000 osd.0 (osd.0) 52 : cluster [DBG] 5.14 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:39:56.853363+0000 osd.0 (osd.0) 53 : cluster [DBG] 5.14 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 56 handle_osd_map epochs [57,57], i have 56, src has [1,57]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.858395 6 0.000055
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.858961 6 0.000192
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.5(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=0 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000047 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=0 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000029 1 0.000049
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000127 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000094 1 0.000250
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.d(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=0 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000037 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=0 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000017
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000105 1 0.000027
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.001582 2 0.000072
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.001506 2 0.000272
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=-1 lpr=56 pi=[43,56)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.011062 3 0.000041
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=-1 lpr=56 pi=[43,56)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.011095 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=-1 lpr=56 pi=[43,56)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=-1 lpr=56 pi=[43,56)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=-1 lpr=56 pi=[43,56)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000052 1 0.000087
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=-1 lpr=56 pi=[43,56)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.c( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=-1 lpr=56 DELETING pi=[43,56)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.124424 2 0.000139
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.c( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=-1 lpr=56 pi=[43,56)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.124530 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.c( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=-1 lpr=56 pi=[43,56)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 0.994662 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=-1 lpr=56 pi=[43,56)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.262468 3 0.000116
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=-1 lpr=56 pi=[43,56)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.262512 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=-1 lpr=56 pi=[43,56)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=-1 lpr=56 pi=[43,56)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=-1 lpr=56 pi=[43,56)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000078 1 0.000096
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=-1 lpr=56 pi=[43,56)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.4( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=-1 lpr=56 DELETING pi=[43,56)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.030861 2 0.000123
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.4( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=-1 lpr=56 pi=[43,56)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.030991 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 57 pg[6.4( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=2 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=-1 lpr=56 pi=[43,56)/1 luod=0'0 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 1.152003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:28.765594+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:39:57.882213+0000 osd.0 (osd.0) 54 : cluster [DBG] 5.15 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:39:57.896102+0000 osd.0 (osd.0) 55 : cluster [DBG] 5.15 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 360448 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 57 handle_osd_map epochs [57,58], i have 57, src has [1,58]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 57 handle_osd_map epochs [57,58], i have 58, src has [1,58]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 55) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:39:57.882213+0000 osd.0 (osd.0) 54 : cluster [DBG] 5.15 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:39:57.896102+0000 osd.0 (osd.0) 55 : cluster [DBG] 5.15 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996569 2 0.000135
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 0.998496 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997202 2 0.000126
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 0.998948 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.5( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=57/58 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.5( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=57/58 n=2 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.5( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=57/58 n=2 ec=43/21 lis/c=57/50 les/c/f=58/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.002057 3 0.000100
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.5( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=57/58 n=2 ec=43/21 lis/c=57/50 les/c/f=58/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.5( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=57/58 n=2 ec=43/21 lis/c=57/50 les/c/f=58/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000038 1 0.000037
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.5( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=57/58 n=2 ec=43/21 lis/c=57/50 les/c/f=58/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.5( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=57/58 n=2 ec=43/21 lis/c=57/50 les/c/f=58/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.5( v 37'39 lc 33'11 (0'0,37'39] local-lis/les=57/58 n=2 ec=43/21 lis/c=57/50 les/c/f=58/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/50 les/c/f=58/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.002726 3 0.000540
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/50 les/c/f=58/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=57/58 n=2 ec=43/21 lis/c=57/50 les/c/f=58/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.066926 3 0.000021
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=57/58 n=2 ec=43/21 lis/c=57/50 les/c/f=58/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=57/58 n=2 ec=43/21 lis/c=57/50 les/c/f=58/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=57/58 n=2 ec=43/21 lis/c=57/50 les/c/f=58/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/50 les/c/f=58/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.066503 3 0.000034
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/50 les/c/f=58/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/50 les/c/f=58/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000019 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.d( v 37'39 lc 33'13 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/50 les/c/f=58/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/50 les/c/f=58/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.125943 1 0.000108
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/50 les/c/f=58/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/50 les/c/f=58/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 58 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/50 les/c/f=58/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:29.765753+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 319488 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:30.765940+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:39:59.821572+0000 osd.0 (osd.0) 56 : cluster [DBG] 2.16 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:39:59.835692+0000 osd.0 (osd.0) 57 : cluster [DBG] 2.16 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.508807182s of 10.792241096s, submitted: 582
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 64364544 unmapped: 507904 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 57) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:39:59.821572+0000 osd.0 (osd.0) 56 : cluster [DBG] 2.16 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:39:59.835692+0000 osd.0 (osd.0) 57 : cluster [DBG] 2.16 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 58 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4cfab/0xba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 58 handle_osd_map epochs [59,60], i have 58, src has [1,60]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 58 handle_osd_map epochs [59,60], i have 60, src has [1,60]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:31.766072+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:00.799009+0000 osd.0 (osd.0) 58 : cluster [DBG] 2.8 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:00.813137+0000 osd.0 (osd.0) 59 : cluster [DBG] 2.8 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 64380928 unmapped: 491520 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 639268 data_alloc: 218103808 data_used: 40960
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 59) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:00.799009+0000 osd.0 (osd.0) 58 : cluster [DBG] 2.8 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:00.813137+0000 osd.0 (osd.0) 59 : cluster [DBG] 2.8 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:32.766232+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.2 deep-scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.2 deep-scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 64380928 unmapped: 491520 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 60 handle_osd_map epochs [61,62], i have 60, src has [1,62]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=44'389 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.234440 16 0.000050
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.236759 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary 9.243871 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=44'389 mlcod 0'0 active mbc={}] exit Started 9.243886 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=44'389 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.765221596s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 active pruub 122.301292419s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.765181541s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 122.301292419s@ mbc={}] exit Reset 0.000069 2 0.000094
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.765181541s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 122.301292419s@ mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.765181541s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 122.301292419s@ mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.765181541s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 122.301292419s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.765181541s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 122.301292419s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.765181541s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 122.301292419s@ mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=44'389 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.231782 15 0.000366
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.236728 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary 9.244493 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=44'389 mlcod 0'0 active mbc={}] exit Started 9.244508 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=44'389 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.767482758s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 active pruub 122.303794861s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.767401695s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 122.303794861s@ mbc={}] exit Reset 0.000113 2 0.000111
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.767401695s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 122.303794861s@ mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.767401695s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 122.303794861s@ mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.767401695s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 122.303794861s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.767401695s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 122.303794861s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.767401695s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 122.303794861s@ mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.242931 18 0.000044
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.245966 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary 10.252906 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] exit Started 10.252935 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=44'389 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.232728 15 0.000041
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.237132 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary 9.245587 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=44'389 mlcod 0'0 active mbc={}] exit Started 9.245605 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=44'389 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=14.757040977s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=44'389 mlcod 0'0 active pruub 121.293930054s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 61 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.766945839s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 active pruub 122.303878784s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.766919136s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 122.303878784s@ mbc={}] exit Reset 0.000041 2 0.000061
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.766919136s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 122.303878784s@ mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.766919136s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 122.303878784s@ mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.766919136s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 122.303878784s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.766919136s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 122.303878784s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=15.766919136s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 122.303878784s@ mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=14.756837845s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 121.293930054s@ mbc={}] exit Reset 0.000242 2 0.000316
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=14.756837845s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 121.293930054s@ mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=14.756837845s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 121.293930054s@ mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=14.756837845s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 121.293930054s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=14.756837845s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 121.293930054s@ mbc={}] exit Start 0.000052 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 62 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=14.756837845s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 121.293930054s@ mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 62 handle_osd_map epochs [61,62], i have 62, src has [1,62]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 62 heartbeat osd_stat(store_statfs(0x4fe10d000/0x0/0x4ffc00000, data 0x507be/0xc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:33.766352+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:02.804545+0000 osd.0 (osd.0) 60 : cluster [DBG] 5.2 deep-scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:02.818665+0000 osd.0 (osd.0) 61 : cluster [DBG] 5.2 deep-scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 64487424 unmapped: 385024 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 62 handle_osd_map epochs [63,63], i have 62, src has [1,63]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 61) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:02.804545+0000 osd.0 (osd.0) 60 : cluster [DBG] 5.2 deep-scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:02.818665+0000 osd.0 (osd.0) 61 : cluster [DBG] 5.2 deep-scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=-1 lpr=61 pi=[53,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.808211 3 0.000135
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=-1 lpr=61 pi=[53,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.808311 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=61) [2] r=-1 lpr=61 pi=[53,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Reset 0.000046 1 0.000070
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.809073 3 0.000040
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.809098 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Reset 0.000039 1 0.000054
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.809448 3 0.000036
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.809473 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Reset 0.000024 1 0.000039
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000121 1 0.000120
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000022 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000021 1 0.000027
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000018 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.808706 3 0.000030
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.808795 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=61) [2] r=-1 lpr=61 pi=[54,61)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000151 1 0.000161
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Reset 0.000031 1 0.000113
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000017 1 0.000025
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000029 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 63 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:34.766528+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 335872 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 63 handle_osd_map epochs [63,64], i have 63, src has [1,64]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 63 handle_osd_map epochs [63,64], i have 64, src has [1,64]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004246 4 0.000087
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004193 4 0.000256
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.004525 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004422 4 0.000040
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.004489 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.004723 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004914 4 0.000059
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.005098 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 activating+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 64 handle_osd_map epochs [64,64], i have 64, src has [1,64]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.524254 5 0.000182
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000045 1 0.000033
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.524637 5 0.000576
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.524784 5 0.000361
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000618 1 0.000016
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.525908 5 0.000486
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.021297 2 0.000030
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.021801 1 0.000038
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000449 1 0.000027
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.038294 2 0.000085
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.060552 1 0.000025
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000397 1 0.000037
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.112446 1 0.000016
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.052492 2 0.000065
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000491 1 0.000073
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:35.766634+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.038212 2 0.000066
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 64 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 64 handle_osd_map epochs [65,65], i have 65, src has [1,65]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.170438 1 0.000052
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active 0.847686 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary 1.852244 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started 1.852263 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.209132 1 0.000176
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active 0.847598 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary 1.852099 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started 1.852117 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.262173 1 0.000063
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active 0.847611 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary 1.852390 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started 1.852405 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[54,63)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.301039 1 0.000113
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active 0.847402 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary 1.852513 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started 1.852526 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[53,63)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65 pruub=15.676793098s) [2] async=[2] r=-1 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 44'389 active pruub 124.874816895s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.676989555s) [2] async=[2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 44'389 active pruub 124.874961853s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.678121567s) [2] async=[2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 44'389 active pruub 124.876121521s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65 pruub=15.676681519s) [2] r=-1 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.874816895s@ mbc={}] exit Reset 0.000126 1 0.000160
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65 pruub=15.676681519s) [2] r=-1 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.874816895s@ mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65 pruub=15.676681519s) [2] r=-1 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.874816895s@ mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65 pruub=15.676681519s) [2] r=-1 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.874816895s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65 pruub=15.676681519s) [2] r=-1 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.874816895s@ mbc={}] exit Start 0.000052 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65 pruub=15.676681519s) [2] r=-1 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.874816895s@ mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.677809715s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.876121521s@ mbc={}] exit Reset 0.000427 1 0.000457
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.677809715s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.876121521s@ mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.677809715s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.876121521s@ mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.677809715s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.876121521s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.677809715s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.876121521s@ mbc={}] exit Start 0.000008 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.677809715s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.876121521s@ mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.676671028s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.874961853s@ mbc={}] exit Reset 0.000337 1 0.000408
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.676671028s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.874961853s@ mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.676671028s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.874961853s@ mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.676671028s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.874961853s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.676671028s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.874961853s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.676671028s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.874961853s@ mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.676982880s) [2] async=[2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 44'389 active pruub 124.874923706s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.676142693s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.874923706s@ mbc={}] exit Reset 0.000859 1 0.000926
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.676142693s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.874923706s@ mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.676142693s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.874923706s@ mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.676142693s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.874923706s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.676142693s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.874923706s@ mbc={}] exit Start 0.000173 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 65 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65 pruub=15.676142693s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 124.874923706s@ mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 1171456 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:36.766799+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:05.828579+0000 osd.0 (osd.0) 62 : cluster [DBG] 2.b scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:05.842691+0000 osd.0 (osd.0) 63 : cluster [DBG] 2.b scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 65 heartbeat osd_stat(store_statfs(0x4fe0fb000/0x0/0x4ffc00000, data 0x58fe2/0xcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019045 6 0.000450
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=-1 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.020051 6 0.000164
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=-1 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=-1 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019980 6 0.000064
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.020015 6 0.000178
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000308 1 0.000031
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=-1 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000632 2 0.000062
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=-1 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000789 2 0.000095
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000727 2 0.000185
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 1105920 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 646305 data_alloc: 218103808 data_used: 49152
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] lb MIN local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=-1 lpr=65 DELETING pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.075939 3 0.000243
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] lb MIN local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.076402 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.f( v 44'389 (0'0,44'389] lb MIN local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.095763 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] lb MIN local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=-1 lpr=65 DELETING pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.097630 2 0.000314
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] lb MIN local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=-1 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.098371 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.17( v 44'389 (0'0,44'389] lb MIN local-lis/les=63/64 n=5 ec=45/34 lis/c=63/53 les/c/f=64/54/0 sis=65) [2] r=-1 lpr=65 pi=[53,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.118542 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] lb MIN local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=-1 lpr=65 DELETING pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.134430 2 0.000104
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] lb MIN local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.135327 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.1f( v 44'389 (0'0,44'389] lb MIN local-lis/les=63/64 n=5 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.155440 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 63) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:05.828579+0000 osd.0 (osd.0) 62 : cluster [DBG] 2.b scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:05.842691+0000 osd.0 (osd.0) 63 : cluster [DBG] 2.b scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] lb MIN local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=-1 lpr=65 DELETING pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.171542 2 0.000237
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] lb MIN local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.172369 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 66 pg[9.7( v 44'389 (0'0,44'389] lb MIN local-lis/les=63/64 n=6 ec=45/34 lis/c=63/54 les/c/f=64/55/0 sis=65) [2] r=-1 lpr=65 pi=[54,65)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.192453 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:37.766990+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 1081344 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:38.767122+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 1081344 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 66 heartbeat osd_stat(store_statfs(0x4fe100000/0x0/0x4ffc00000, data 0x5a7ef/0xcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:39.767246+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:08.839098+0000 osd.0 (osd.0) 64 : cluster [DBG] 2.1f scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:08.853210+0000 osd.0 (osd.0) 65 : cluster [DBG] 2.1f scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 1064960 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 65) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:08.839098+0000 osd.0 (osd.0) 64 : cluster [DBG] 2.1f scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:08.853210+0000 osd.0 (osd.0) 65 : cluster [DBG] 2.1f scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 67 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 27.636274 62 0.000112
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 67 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 27.638560 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 67 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 27.638912 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 67 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 27.638931 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 67 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 67 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67 pruub=12.363718033s) [2] r=-1 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 126.222518921s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 67 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67 pruub=12.363677025s) [2] r=-1 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.222518921s@ mbc={}] exit Reset 0.000066 1 0.000097
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 67 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67 pruub=12.363677025s) [2] r=-1 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.222518921s@ mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 67 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67 pruub=12.363677025s) [2] r=-1 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.222518921s@ mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 67 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67 pruub=12.363677025s) [2] r=-1 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.222518921s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 67 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67 pruub=12.363677025s) [2] r=-1 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.222518921s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 67 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67 pruub=12.363677025s) [2] r=-1 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.222518921s@ mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:40.767422+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 67 heartbeat osd_stat(store_statfs(0x4fe0fd000/0x0/0x4ffc00000, data 0x5c507/0xd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.111890793s of 10.161882401s, submitted: 54
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 67 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 68 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=-1 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.359934 6 0.000086
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 68 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=-1 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 68 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=-1 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 68 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=-1 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000076 2 0.000032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 68 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=-1 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 68 pg[6.8( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=-1 lpr=67 DELETING pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.001071 1 0.000037
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 68 pg[6.8( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=-1 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.001189 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 68 pg[6.8( v 37'39 (0'0,37'39] lb MIN local-lis/les=43/44 n=1 ec=43/21 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=-1 lpr=67 pi=[43,67)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.361164 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1056768 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:41.767526+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 68 handle_osd_map epochs [68,69], i have 68, src has [1,69]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 69 handle_osd_map epochs [69,69], i have 69, src has [1,69]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 69 pg[6.9(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 69 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=0 lpr=0 pi=[50,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000077 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 69 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=0 lpr=0 pi=[50,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 69 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000044
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 69 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 69 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 69 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 69 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000076 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 69 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 69 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 69 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 69 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000109 1 0.000172
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 69 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 69 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000395 2 0.000071
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 69 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 69 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 69 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1040384 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 634195 data_alloc: 218103808 data_used: 49152
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:42.767631+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 69 handle_osd_map epochs [69,70], i have 69, src has [1,70]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 70 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997009 2 0.000099
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 70 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.997611 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 70 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=50/51 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=37'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 70 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=69/70 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 70 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=69/70 n=1 ec=43/21 lis/c=50/50 les/c/f=51/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 70 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=69/70 n=1 ec=43/21 lis/c=69/50 les/c/f=70/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001300 3 0.000088
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 70 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=69/70 n=1 ec=43/21 lis/c=69/50 les/c/f=70/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 70 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=69/70 n=1 ec=43/21 lis/c=69/50 les/c/f=70/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000032 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 70 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=69/70 n=1 ec=43/21 lis/c=69/50 les/c/f=70/51/0 sis=69) [0] r=0 lpr=69 pi=[50,69)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 70 handle_osd_map epochs [69,70], i have 70, src has [1,70]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 70 handle_osd_map epochs [70,70], i have 70, src has [1,70]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1032192 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:43.767774+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 64897024 unmapped: 1024000 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:44.767889+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 64905216 unmapped: 1015808 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:45.768004+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 64970752 unmapped: 950272 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:46.768131+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:15.817082+0000 osd.0 (osd.0) 66 : cluster [DBG] 5.3 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:15.831278+0000 osd.0 (osd.0) 67 : cluster [DBG] 5.3 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 71 heartbeat osd_stat(store_statfs(0x4fe0f1000/0x0/0x4ffc00000, data 0x63188/0xdc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 925696 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 640262 data_alloc: 218103808 data_used: 57344
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 67) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:15.817082+0000 osd.0 (osd.0) 66 : cluster [DBG] 5.3 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:15.831278+0000 osd.0 (osd.0) 67 : cluster [DBG] 5.3 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:47.768311+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 925696 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:48.768418+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65019904 unmapped: 901120 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:49.768519+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65019904 unmapped: 901120 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 71 handle_osd_map epochs [71,72], i have 71, src has [1,72]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:50.768689+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 892928 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 72 pg[6.a(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=0 lpr=0 pi=[52,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000071 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=0 lpr=0 pi=[52,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=0 lpr=72 pi=[52,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000024 1 0.000047
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=0 lpr=72 pi=[52,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=0 lpr=72 pi=[52,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=0 lpr=72 pi=[52,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=0 lpr=72 pi=[52,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000072 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=0 lpr=72 pi=[52,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=0 lpr=72 pi=[52,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=0 lpr=72 pi=[52,72)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=0 lpr=72 pi=[52,72)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000140 1 0.000180
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.300171852s of 10.322259903s, submitted: 33
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=0 lpr=72 pi=[52,72)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 72 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=0 lpr=72 pi=[52,72)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000475 2 0.000074
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 72 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=0 lpr=72 pi=[52,72)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 72 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=0 lpr=72 pi=[52,72)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000019 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 72 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=0 lpr=72 pi=[52,72)/1 crt=37'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:51.768789+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65019904 unmapped: 901120 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 643708 data_alloc: 218103808 data_used: 57344
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 72 handle_osd_map epochs [72,73], i have 72, src has [1,73]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 73 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=0 lpr=72 pi=[52,72)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.813110 2 0.000113
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 73 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=0 lpr=72 pi=[52,72)/1 crt=37'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.813854 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 73 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=52/53 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=0 lpr=72 pi=[52,72)/1 crt=37'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 73 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=72/73 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=0 lpr=72 pi=[52,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 73 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=72/73 n=1 ec=43/21 lis/c=52/52 les/c/f=53/53/0 sis=72) [0] r=0 lpr=72 pi=[52,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 73 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=72/73 n=1 ec=43/21 lis/c=72/52 les/c/f=73/53/0 sis=72) [0] r=0 lpr=72 pi=[52,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001023 3 0.000198
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 73 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=72/73 n=1 ec=43/21 lis/c=72/52 les/c/f=73/53/0 sis=72) [0] r=0 lpr=72 pi=[52,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 73 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=72/73 n=1 ec=43/21 lis/c=72/52 les/c/f=73/53/0 sis=72) [0] r=0 lpr=72 pi=[52,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 73 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=72/73 n=1 ec=43/21 lis/c=72/52 les/c/f=73/53/0 sis=72) [0] r=0 lpr=72 pi=[52,72)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 73 handle_osd_map epochs [73,73], i have 73, src has [1,73]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fe0eb000/0x0/0x4ffc00000, data 0x668d3/0xe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:52.768888+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 892928 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:53.768982+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65019904 unmapped: 901120 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:54.769100+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:24.666568+0000 osd.0 (osd.0) 68 : cluster [DBG] 5.5 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:24.680702+0000 osd.0 (osd.0) 69 : cluster [DBG] 5.5 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 884736 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 69) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:24.666568+0000 osd.0 (osd.0) 68 : cluster [DBG] 5.5 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:24.680702+0000 osd.0 (osd.0) 69 : cluster [DBG] 5.5 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:55.769275+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 884736 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:56.769366+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fe0ec000/0x0/0x4ffc00000, data 0x668d3/0xe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 884736 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 648147 data_alloc: 218103808 data_used: 65536
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:57.769480+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:27.608200+0000 osd.0 (osd.0) 70 : cluster [DBG] 2.1c scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:27.622269+0000 osd.0 (osd.0) 71 : cluster [DBG] 2.1c scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 884736 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 71) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:27.608200+0000 osd.0 (osd.0) 70 : cluster [DBG] 2.1c scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:27.622269+0000 osd.0 (osd.0) 71 : cluster [DBG] 2.1c scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:58.769606+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 884736 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fe0ec000/0x0/0x4ffc00000, data 0x668d3/0xe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 73 handle_osd_map epochs [74,74], i have 74, src has [1,74]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 74 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=37'39 mlcod 37'39 active+clean] exit Started/Primary/Active/Clean 33.737828 55 0.000195
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 74 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active 34.720219 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 74 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary 35.729378 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 74 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started 35.729476 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 74 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 74 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74 pruub=13.283830643s) [1] r=-1 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 37'39 active pruub 146.304275513s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 74 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74 pruub=13.283709526s) [1] r=-1 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 146.304275513s@ mbc={}] exit Reset 0.000155 1 0.000422
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 74 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74 pruub=13.283709526s) [1] r=-1 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 146.304275513s@ mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 74 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74 pruub=13.283709526s) [1] r=-1 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 146.304275513s@ mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 74 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74 pruub=13.283709526s) [1] r=-1 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 146.304275513s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 74 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74 pruub=13.283709526s) [1] r=-1 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 146.304275513s@ mbc={}] exit Start 0.000047 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 74 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74 pruub=13.283709526s) [1] r=-1 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 146.304275513s@ mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:39:59.769703+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 860160 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 75 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=-1 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.385161 6 0.000170
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 75 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=-1 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 75 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=-1 lpr=74 pi=[54,74)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 75 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=-1 lpr=74 pi=[54,74)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.009361 3 0.000037
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 75 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=-1 lpr=74 pi=[54,74)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.009382 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 75 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=-1 lpr=74 pi=[54,74)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 75 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=-1 lpr=74 pi=[54,74)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 75 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=-1 lpr=74 pi=[54,74)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000065 1 0.000054
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 75 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=-1 lpr=74 pi=[54,74)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 75 pg[6.b( v 37'39 (0'0,37'39] lb MIN local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=-1 lpr=74 DELETING pi=[54,74)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.008575 2 0.000177
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 75 pg[6.b( v 37'39 (0'0,37'39] lb MIN local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=-1 lpr=74 pi=[54,74)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.008684 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 75 pg[6.b( v 37'39 (0'0,37'39] lb MIN local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=74) [1] r=-1 lpr=74 pi=[54,74)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started 0.403332 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:00.769828+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 75 heartbeat osd_stat(store_statfs(0x4fe0e5000/0x0/0x4ffc00000, data 0x6a30b/0xe8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 75 handle_osd_map epochs [75,76], i have 75, src has [1,76]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65077248 unmapped: 843776 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 76 heartbeat osd_stat(store_statfs(0x4fe0e5000/0x0/0x4ffc00000, data 0x6a30b/0xe8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.f scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.265448570s of 10.303452492s, submitted: 21
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.f scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:01.769924+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:31.587009+0000 osd.0 (osd.0) 72 : cluster [DBG] 2.f scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:31.600933+0000 osd.0 (osd.0) 73 : cluster [DBG] 2.f scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65077248 unmapped: 843776 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 658418 data_alloc: 218103808 data_used: 73728
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 73) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:31.587009+0000 osd.0 (osd.0) 72 : cluster [DBG] 2.f scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:31.600933+0000 osd.0 (osd.0) 73 : cluster [DBG] 2.f scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=37'39 mlcod 37'39 active+clean] exit Started/Primary/Active/Clean 33.126031 55 0.000423
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active 33.321370 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary 34.319881 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started 34.319897 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77 pruub=14.681272507s) [1] r=-1 lpr=77 pi=[57,77)/1 crt=37'39 mlcod 37'39 active pruub 150.319351196s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77 pruub=14.680977821s) [1] r=-1 lpr=77 pi=[57,77)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 150.319351196s@ mbc={}] exit Reset 0.000322 1 0.000335
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77 pruub=14.680977821s) [1] r=-1 lpr=77 pi=[57,77)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 150.319351196s@ mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77 pruub=14.680977821s) [1] r=-1 lpr=77 pi=[57,77)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 150.319351196s@ mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77 pruub=14.680977821s) [1] r=-1 lpr=77 pi=[57,77)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 150.319351196s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77 pruub=14.680977821s) [1] r=-1 lpr=77 pi=[57,77)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 150.319351196s@ mbc={}] exit Start 0.000287 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 77 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77 pruub=14.680977821s) [1] r=-1 lpr=77 pi=[57,77)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 150.319351196s@ mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:02.770045+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 770048 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 78 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=-1 lpr=77 pi=[57,77)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.785217 6 0.000459
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 78 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=-1 lpr=77 pi=[57,77)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 78 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=-1 lpr=77 pi=[57,77)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 78 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=-1 lpr=77 pi=[57,77)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.068848 3 0.000123
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 78 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=-1 lpr=77 pi=[57,77)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.068903 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 78 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=-1 lpr=77 pi=[57,77)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 78 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=-1 lpr=77 pi=[57,77)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 78 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=-1 lpr=77 pi=[57,77)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000041 1 0.000038
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 78 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=-1 lpr=77 pi=[57,77)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 78 pg[6.d( v 37'39 (0'0,37'39] lb MIN local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=-1 lpr=77 DELETING pi=[57,77)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.016066 2 0.000095
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 78 pg[6.d( v 37'39 (0'0,37'39] lb MIN local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=-1 lpr=77 pi=[57,77)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.016133 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 78 pg[6.d( v 37'39 (0'0,37'39] lb MIN local-lis/les=57/58 n=1 ec=43/21 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=-1 lpr=77 pi=[57,77)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started 0.870653 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:03.770143+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 770048 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 78 handle_osd_map epochs [78,79], i have 78, src has [1,79]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 79 heartbeat osd_stat(store_statfs(0x4fe0dd000/0x0/0x4ffc00000, data 0x6f50b/0xf0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:04.770242+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 712704 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:05.770367+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 679936 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 79 heartbeat osd_stat(store_statfs(0x4fe0d9000/0x0/0x4ffc00000, data 0x70f3e/0xf3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:06.770502+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:36.636261+0000 osd.0 (osd.0) 74 : cluster [DBG] 5.7 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:36.649934+0000 osd.0 (osd.0) 75 : cluster [DBG] 5.7 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65339392 unmapped: 581632 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 666325 data_alloc: 218103808 data_used: 114688
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 79 heartbeat osd_stat(store_statfs(0x4fe0d9000/0x0/0x4ffc00000, data 0x70f3e/0xf3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 75) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:36.636261+0000 osd.0 (osd.0) 74 : cluster [DBG] 5.7 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:36.649934+0000 osd.0 (osd.0) 75 : cluster [DBG] 5.7 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.18 deep-scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.18 deep-scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:07.770671+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:37.678931+0000 osd.0 (osd.0) 76 : cluster [DBG] 2.18 deep-scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:37.693006+0000 osd.0 (osd.0) 77 : cluster [DBG] 2.18 deep-scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 565248 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 77) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:37.678931+0000 osd.0 (osd.0) 76 : cluster [DBG] 2.18 deep-scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:37.693006+0000 osd.0 (osd.0) 77 : cluster [DBG] 2.18 deep-scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 79 heartbeat osd_stat(store_statfs(0x4fe0db000/0x0/0x4ffc00000, data 0x70f3e/0xf3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:08.770803+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65388544 unmapped: 532480 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:09.770932+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65388544 unmapped: 532480 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 79 handle_osd_map epochs [79,80], i have 79, src has [1,80]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:10.771065+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65404928 unmapped: 516096 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:11.771184+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.157444954s of 10.185270309s, submitted: 16
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65404928 unmapped: 516096 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 672792 data_alloc: 218103808 data_used: 122880
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 80 handle_osd_map epochs [80,81], i have 80, src has [1,81]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=37'39 mlcod 37'39 active+clean] exit Started/Primary/Active/Clean 46.372446 77 0.000299
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary/Active 47.148006 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started/Primary 48.156101 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=37'39 mlcod 37'39 active mbc={255={}}] exit Started 48.156125 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=37'39 mlcod 37'39 active mbc={255={}}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81 pruub=8.856573105s) [2] r=-1 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 37'39 active pruub 154.304397583s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81 pruub=8.856448174s) [2] r=-1 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 154.304397583s@ mbc={}] exit Reset 0.000253 1 0.000354
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81 pruub=8.856448174s) [2] r=-1 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 154.304397583s@ mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81 pruub=8.856448174s) [2] r=-1 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 154.304397583s@ mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81 pruub=8.856448174s) [2] r=-1 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 154.304397583s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81 pruub=8.856448174s) [2] r=-1 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 154.304397583s@ mbc={}] exit Start 0.000048 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 81 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81 pruub=8.856448174s) [2] r=-1 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 154.304397583s@ mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 81 handle_osd_map epochs [81,81], i have 81, src has [1,81]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:12.771321+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:41.771956+0000 osd.0 (osd.0) 78 : cluster [DBG] 5.4 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:41.786056+0000 osd.0 (osd.0) 79 : cluster [DBG] 5.4 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65470464 unmapped: 450560 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 79) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:41.771956+0000 osd.0 (osd.0) 78 : cluster [DBG] 5.4 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:41.786056+0000 osd.0 (osd.0) 79 : cluster [DBG] 5.4 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 81 handle_osd_map epochs [81,82], i have 81, src has [1,82]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 82 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=-1 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.007598 7 0.000255
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 82 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=-1 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 82 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=-1 lpr=81 pi=[54,81)/1 crt=37'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 82 handle_osd_map epochs [82,82], i have 82, src has [1,82]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 82 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=-1 lpr=81 pi=[54,81)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.129856 2 0.000059
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 82 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=-1 lpr=81 pi=[54,81)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.129902 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 82 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=-1 lpr=81 pi=[54,81)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 82 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=-1 lpr=81 pi=[54,81)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 82 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=-1 lpr=81 pi=[54,81)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000049 1 0.000064
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 82 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=-1 lpr=81 pi=[54,81)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 82 pg[6.f( v 37'39 (0'0,37'39] lb MIN local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=-1 lpr=81 DELETING pi=[54,81)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.023672 2 0.000136
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 82 pg[6.f( v 37'39 (0'0,37'39] lb MIN local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=-1 lpr=81 pi=[54,81)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.023757 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 82 pg[6.f( v 37'39 (0'0,37'39] lb MIN local-lis/les=54/55 n=1 ec=43/21 lis/c=54/54 les/c/f=55/55/0 sis=81) [2] r=-1 lpr=81 pi=[54,81)/1 luod=0'0 crt=37'39 mlcod 0'0 active mbc={}] exit Started 1.161366 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:13.771443+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 1 last_log 80 sent 79 num 1 unsent 1 sending 1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:43.760456+0000 osd.0 (osd.0) 80 : cluster [DBG] 2.1d scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 82 heartbeat osd_stat(store_statfs(0x4fe0d2000/0x0/0x4ffc00000, data 0x76145/0xfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65511424 unmapped: 409600 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 80) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:43.760456+0000 osd.0 (osd.0) 80 : cluster [DBG] 2.1d scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:14.771605+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 1 last_log 81 sent 80 num 1 unsent 1 sending 1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:43.774352+0000 osd.0 (osd.0) 81 : cluster [DBG] 2.1d scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65544192 unmapped: 376832 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 81) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:43.774352+0000 osd.0 (osd.0) 81 : cluster [DBG] 2.1d scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:15.771731+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65568768 unmapped: 352256 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 83 handle_osd_map epochs [83,84], i have 83, src has [1,84]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:16.771863+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65576960 unmapped: 344064 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 681818 data_alloc: 218103808 data_used: 122880
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 84 heartbeat osd_stat(store_statfs(0x4fe0ca000/0x0/0x4ffc00000, data 0x7983f/0x100000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:17.771968+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65576960 unmapped: 344064 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:18.772089+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 327680 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 84 heartbeat osd_stat(store_statfs(0x4fe0ca000/0x0/0x4ffc00000, data 0x7983f/0x100000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:19.772191+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65601536 unmapped: 319488 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:20.772330+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 85 heartbeat osd_stat(store_statfs(0x4fe0ca000/0x0/0x4ffc00000, data 0x7b3bc/0x103000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 85 handle_osd_map epochs [86,86], i have 86, src has [1,86]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 56.928731 97 0.000233
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active 56.932494 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary 57.939321 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] exit Started 57.939350 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=53) [0] r=0 lpr=53 crt=44'389 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86 pruub=15.071921349s) [2] r=-1 lpr=86 pi=[53,86)/1 crt=44'389 mlcod 0'0 active pruub 169.294906616s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86 pruub=15.071560860s) [2] r=-1 lpr=86 pi=[53,86)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 169.294906616s@ mbc={}] exit Reset 0.000394 1 0.000516
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86 pruub=15.071560860s) [2] r=-1 lpr=86 pi=[53,86)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 169.294906616s@ mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86 pruub=15.071560860s) [2] r=-1 lpr=86 pi=[53,86)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 169.294906616s@ mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86 pruub=15.071560860s) [2] r=-1 lpr=86 pi=[53,86)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 169.294906616s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86 pruub=15.071560860s) [2] r=-1 lpr=86 pi=[53,86)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 169.294906616s@ mbc={}] exit Start 0.000092 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 86 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86 pruub=15.071560860s) [2] r=-1 lpr=86 pi=[53,86)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 169.294906616s@ mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65609728 unmapped: 311296 heap: 65921024 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=-1 lpr=86 pi=[53,86)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.268342 3 0.000372
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=-1 lpr=86 pi=[53,86)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.268493 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=86) [2] r=-1 lpr=86 pi=[53,86)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Reset 0.000065 1 0.000091
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 87 handle_osd_map epochs [87,87], i have 87, src has [1,87]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001749 2 0.000031
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000022 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 87 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:21.772473+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 262144 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 688718 data_alloc: 218103808 data_used: 122880
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.431213379s of 10.468804359s, submitted: 36
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 87 handle_osd_map epochs [87,88], i have 88, src has [1,88]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004425 3 0.000091
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.006327 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=53/54 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=53/53 les/c/f=54/54/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.002284 5 0.000547
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000234 1 0.000027
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000286 1 0.000023
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035383 2 0.000056
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 88 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:22.772610+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 1294336 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.965996 1 0.000151
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active 1.004553 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary 2.010985 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started 2.011014 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[53,87)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89 pruub=14.997830391s) [2] async=[2] r=-1 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 44'389 active pruub 171.500854492s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89 pruub=14.997679710s) [2] r=-1 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 171.500854492s@ mbc={}] exit Reset 0.000184 1 0.000260
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89 pruub=14.997679710s) [2] r=-1 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 171.500854492s@ mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89 pruub=14.997679710s) [2] r=-1 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 171.500854492s@ mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89 pruub=14.997679710s) [2] r=-1 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 171.500854492s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89 pruub=14.997679710s) [2] r=-1 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 171.500854492s@ mbc={}] exit Start 0.000044 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 89 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89 pruub=14.997679710s) [2] r=-1 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 171.500854492s@ mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 89 handle_osd_map epochs [89,89], i have 89, src has [1,89]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:23.772720+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:52.790353+0000 osd.0 (osd.0) 82 : cluster [DBG] 2.19 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:52.804441+0000 osd.0 (osd.0) 83 : cluster [DBG] 2.19 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65740800 unmapped: 1228800 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 89 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 83) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:52.790353+0000 osd.0 (osd.0) 82 : cluster [DBG] 2.19 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:52.804441+0000 osd.0 (osd.0) 83 : cluster [DBG] 2.19 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 90 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=-1 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.007177 7 0.000204
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 90 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=-1 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 90 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=-1 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 90 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=-1 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000057 1 0.000095
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 90 pg[9.13( v 44'389 (0'0,44'389] local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=-1 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 90 pg[9.13( v 44'389 (0'0,44'389] lb MIN local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=-1 lpr=89 DELETING pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.038279 2 0.000196
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 90 pg[9.13( v 44'389 (0'0,44'389] lb MIN local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=-1 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.038390 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 90 pg[9.13( v 44'389 (0'0,44'389] lb MIN local-lis/les=87/88 n=5 ec=45/34 lis/c=87/53 les/c/f=88/54/0 sis=89) [2] r=-1 lpr=89 pi=[53,89)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.045708 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:24.772872+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 90 heartbeat osd_stat(store_statfs(0x4fcf1c000/0x0/0x4ffc00000, data 0x837f7/0x111000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65798144 unmapped: 1171456 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:25.772992+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65798144 unmapped: 1171456 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:26.773110+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:55.878552+0000 osd.0 (osd.0) 84 : cluster [DBG] 5.1e scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:55.892582+0000 osd.0 (osd.0) 85 : cluster [DBG] 5.1e scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1155072 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 692167 data_alloc: 218103808 data_used: 126976
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 85) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:55.878552+0000 osd.0 (osd.0) 84 : cluster [DBG] 5.1e scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:55.892582+0000 osd.0 (osd.0) 85 : cluster [DBG] 5.1e scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:27.773251+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:56.909997+0000 osd.0 (osd.0) 86 : cluster [DBG] 7.1b scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:56.924058+0000 osd.0 (osd.0) 87 : cluster [DBG] 7.1b scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1155072 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 87) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:56.909997+0000 osd.0 (osd.0) 86 : cluster [DBG] 7.1b scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:56.924058+0000 osd.0 (osd.0) 87 : cluster [DBG] 7.1b scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:28.773365+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:57.909543+0000 osd.0 (osd.0) 88 : cluster [DBG] 8.14 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:40:57.923580+0000 osd.0 (osd.0) 89 : cluster [DBG] 8.14 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 1146880 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 89) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:57.909543+0000 osd.0 (osd.0) 88 : cluster [DBG] 8.14 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:40:57.923580+0000 osd.0 (osd.0) 89 : cluster [DBG] 8.14 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:29.773483+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 1146880 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 90 handle_osd_map epochs [90,91], i have 90, src has [1,91]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:30.773596+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 91 heartbeat osd_stat(store_statfs(0x4fcf19000/0x0/0x4ffc00000, data 0x85374/0x114000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 1130496 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:31.773734+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:00.928181+0000 osd.0 (osd.0) 90 : cluster [DBG] 11.14 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:00.942031+0000 osd.0 (osd.0) 91 : cluster [DBG] 11.14 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 1130496 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 698904 data_alloc: 218103808 data_used: 135168
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 91) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:00.928181+0000 osd.0 (osd.0) 90 : cluster [DBG] 11.14 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:00.942031+0000 osd.0 (osd.0) 91 : cluster [DBG] 11.14 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:32.773918+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:01.891622+0000 osd.0 (osd.0) 92 : cluster [DBG] 7.18 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:01.905728+0000 osd.0 (osd.0) 93 : cluster [DBG] 7.18 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.592417717s of 10.621297836s, submitted: 23
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 1122304 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 93) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:01.891622+0000 osd.0 (osd.0) 92 : cluster [DBG] 7.18 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:01.905728+0000 osd.0 (osd.0) 93 : cluster [DBG] 7.18 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:33.774097+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:02.862092+0000 osd.0 (osd.0) 94 : cluster [DBG] 8.10 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:02.876203+0000 osd.0 (osd.0) 95 : cluster [DBG] 8.10 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 1097728 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 95) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:02.862092+0000 osd.0 (osd.0) 94 : cluster [DBG] 8.10 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:02.876203+0000 osd.0 (osd.0) 95 : cluster [DBG] 8.10 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 91 handle_osd_map epochs [92,93], i have 91, src has [1,93]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 93 pg[9.16(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93) [0] r=0 lpr=0 pi=[62,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000041 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93) [0] r=0 lpr=0 pi=[62,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93) [0] r=0 lpr=93 pi=[62,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000022
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93) [0] r=0 lpr=93 pi=[62,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93) [0] r=0 lpr=93 pi=[62,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93) [0] r=0 lpr=93 pi=[62,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93) [0] r=0 lpr=93 pi=[62,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93) [0] r=0 lpr=93 pi=[62,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93) [0] r=0 lpr=93 pi=[62,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93) [0] r=0 lpr=93 pi=[62,93)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93) [0] r=0 lpr=93 pi=[62,93)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000467 1 0.000030
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 92 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=44'389 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 69.246103 114 0.000230
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 92 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active 69.248279 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 92 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary 70.255571 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 92 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=44'389 mlcod 0'0 active mbc={}] exit Started 70.255599 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 92 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=44'389 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 92 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92 pruub=10.754682541s) [1] r=-1 lpr=92 pi=[54,92)/1 crt=44'389 mlcod 0'0 active pruub 178.302368164s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93) [0] r=0 lpr=93 pi=[62,93)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93) [0] r=0 lpr=93 pi=[62,93)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000043 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93) [0] r=0 lpr=93 pi=[62,93)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.001285 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93) [0] r=0 lpr=93 pi=[62,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 93 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92 pruub=10.754209518s) [1] r=-1 lpr=92 pi=[54,92)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 178.302368164s@ mbc={}] exit Reset 0.000510 2 0.000575
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 93 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92 pruub=10.754209518s) [1] r=-1 lpr=92 pi=[54,92)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 178.302368164s@ mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 93 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92 pruub=10.754209518s) [1] r=-1 lpr=92 pi=[54,92)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 178.302368164s@ mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 93 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92 pruub=10.754209518s) [1] r=-1 lpr=92 pi=[54,92)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 178.302368164s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 93 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92 pruub=10.754209518s) [1] r=-1 lpr=92 pi=[54,92)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 178.302368164s@ mbc={}] exit Start 0.000110 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 93 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92 pruub=10.754209518s) [1] r=-1 lpr=92 pi=[54,92)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 178.302368164s@ mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 93 handle_osd_map epochs [92,93], i have 93, src has [1,93]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf19000/0x0/0x4ffc00000, data 0x85374/0x114000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:34.774217+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 1007616 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 93 handle_osd_map epochs [93,94], i have 93, src has [1,94]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92) [1] r=-1 lpr=92 pi=[54,92)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.001343 3 0.000272
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92) [1] r=-1 lpr=92 pi=[54,92)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.001621 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=92) [1] r=-1 lpr=92 pi=[54,92)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 94 handle_osd_map epochs [93,94], i have 94, src has [1,94]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93) [0] r=0 lpr=93 pi=[62,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.002585 2 0.000829
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93) [0] r=0 lpr=93 pi=[62,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.004134 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93) [0] r=0 lpr=93 pi=[62,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.004164 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=93) [0] r=0 lpr=93 pi=[62,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=-1 lpr=94 pi=[62,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Reset 0.000982 1 0.001127
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=-1 lpr=94 pi=[62,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000680 1 0.000971
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=-1 lpr=94 pi=[62,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=-1 lpr=94 pi=[62,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=-1 lpr=94 pi=[62,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=-1 lpr=94 pi=[62,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000093 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=-1 lpr=94 pi=[62,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 94 handle_osd_map epochs [94,94], i have 94, src has [1,94]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003114 2 0.000271
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000033 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 94 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 94 handle_osd_map epochs [94,94], i have 94, src has [1,94]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:35.774310+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:05.758036+0000 osd.0 (osd.0) 96 : cluster [DBG] 11.1 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:05.772088+0000 osd.0 (osd.0) 97 : cluster [DBG] 11.1 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 991232 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 94 handle_osd_map epochs [94,95], i have 94, src has [1,95]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 95 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002133 3 0.000125
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 95 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.005586 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 95 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 95 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 95 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 95 pg[9.16( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=-1 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.007719 6 0.000345
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 95 pg[9.16( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=-1 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 95 pg[9.16( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=94) [0]/[2] r=-1 lpr=94 pi=[62,94)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 97) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:05.758036+0000 osd.0 (osd.0) 96 : cluster [DBG] 11.1 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:05.772088+0000 osd.0 (osd.0) 97 : cluster [DBG] 11.1 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: not registered w/ OSD
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 95 pg[9.16( v 44'389 lc 40'69 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] r=-1 lpr=94 pi=[62,94)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002629 3 0.000072
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 95 pg[9.16( v 44'389 lc 40'69 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] r=-1 lpr=94 pi=[62,94)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 95 pg[9.16( v 44'389 lc 40'69 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] r=-1 lpr=94 pi=[62,94)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000074 1 0.000317
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 95 pg[9.16( v 44'389 lc 40'69 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] r=-1 lpr=94 pi=[62,94)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] r=-1 lpr=94 pi=[62,94)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.028617 1 0.000027
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 95 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] r=-1 lpr=94 pi=[62,94)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 95 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 95 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.086949 5 0.001815
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 95 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 95 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000090 1 0.000089
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 95 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 95 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000323 1 0.000326
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 95 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 95 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.028350 2 0.000062
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 95 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:36.774463+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 958464 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720981 data_alloc: 218103808 data_used: 139264
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 95 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] r=-1 lpr=94 pi=[62,94)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.976140 1 0.000025
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] r=-1 lpr=94 pi=[62,94)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.007736 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] r=-1 lpr=94 pi=[62,94)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 2.015754 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=94) [0]/[2] r=-1 lpr=94 pi=[62,94)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000077 1 0.000109
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.894522 1 0.000230
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active 1.010887 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary 2.016510 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started 2.016537 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[54,94)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000754 1 0.000831
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96 pruub=15.074978828s) [1] async=[1] r=-1 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 44'389 active pruub 185.642929077s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0 olog.dups.size()=9
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001085 3 0.000036
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000016 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96 pruub=15.073821068s) [1] r=-1 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.642929077s@ mbc={}] exit Reset 0.001660 1 0.001555
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96 pruub=15.073821068s) [1] r=-1 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.642929077s@ mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96 pruub=15.073821068s) [1] r=-1 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.642929077s@ mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96 pruub=15.073821068s) [1] r=-1 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.642929077s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96 pruub=15.073821068s) [1] r=-1 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.642929077s@ mbc={}] exit Start 0.000056 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 96 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96 pruub=15.073821068s) [1] r=-1 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 185.642929077s@ mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 96 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:37.774552+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 96 heartbeat osd_stat(store_statfs(0x4fcf08000/0x0/0x4ffc00000, data 0x8da4d/0x124000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 942080 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 96 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 97 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005555 2 0.000061
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 97 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007459 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 97 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 97 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=96/97 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 97 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=96/97 n=5 ec=45/34 lis/c=94/62 les/c/f=95/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 97 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=96/97 n=5 ec=45/34 lis/c=96/62 les/c/f=97/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001844 4 0.002321
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 97 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=96/97 n=5 ec=45/34 lis/c=96/62 les/c/f=97/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 97 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=96/97 n=5 ec=45/34 lis/c=96/62 les/c/f=97/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 97 pg[9.16( v 44'389 (0'0,44'389] local-lis/les=96/97 n=5 ec=45/34 lis/c=96/62 les/c/f=97/63/0 sis=96) [0] r=0 lpr=96 pi=[62,96)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 97 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=-1 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.010785 7 0.000626
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 97 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=-1 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 97 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=-1 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 97 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=-1 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000063 1 0.000065
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 97 pg[9.15( v 44'389 (0'0,44'389] local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=-1 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 97 pg[9.15( v 44'389 (0'0,44'389] lb MIN local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=-1 lpr=96 DELETING pi=[54,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.031690 2 0.000173
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 97 pg[9.15( v 44'389 (0'0,44'389] lb MIN local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=-1 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.031848 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 97 pg[9.15( v 44'389 (0'0,44'389] lb MIN local-lis/les=94/95 n=5 ec=45/34 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=-1 lpr=96 pi=[54,96)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.042767 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:38.774639+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 1 last_log 98 sent 97 num 1 unsent 1 sending 1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:08.773019+0000 osd.0 (osd.0) 98 : cluster [DBG] 7.1f scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: handle_auth_request added challenge on 0x560330fb3800
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 933888 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 98) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:08.773019+0000 osd.0 (osd.0) 98 : cluster [DBG] 7.1f scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:39.774776+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 1 last_log 99 sent 98 num 1 unsent 1 sending 1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:08.786881+0000 osd.0 (osd.0) 99 : cluster [DBG] 7.1f scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 933888 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 99) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:08.786881+0000 osd.0 (osd.0) 99 : cluster [DBG] 7.1f scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.f scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.f scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:40.774910+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:10.733617+0000 osd.0 (osd.0) 100 : cluster [DBG] 11.f scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:10.747745+0000 osd.0 (osd.0) 101 : cluster [DBG] 11.f scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 933888 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 101) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:10.733617+0000 osd.0 (osd.0) 100 : cluster [DBG] 11.f scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:10.747745+0000 osd.0 (osd.0) 101 : cluster [DBG] 11.f scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.c scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.c scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:41.775079+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:11.731369+0000 osd.0 (osd.0) 102 : cluster [DBG] 8.c scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:11.745286+0000 osd.0 (osd.0) 103 : cluster [DBG] 8.c scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 909312 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 725775 data_alloc: 218103808 data_used: 143360
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 98 handle_osd_map epochs [98,99], i have 98, src has [1,99]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=44'389 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 77.287201 133 0.000265
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active 77.292030 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary 78.299763 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=44'389 mlcod 0'0 active mbc={}] exit Started 78.299792 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=44'389 mlcod 0'0 active mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99 pruub=10.713154793s) [2] r=-1 lpr=99 pi=[54,99)/1 crt=44'389 mlcod 0'0 active pruub 186.304855347s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99 pruub=10.712953568s) [2] r=-1 lpr=99 pi=[54,99)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 186.304855347s@ mbc={}] exit Reset 0.000240 1 0.000428
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99 pruub=10.712953568s) [2] r=-1 lpr=99 pi=[54,99)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 186.304855347s@ mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99 pruub=10.712953568s) [2] r=-1 lpr=99 pi=[54,99)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 186.304855347s@ mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99 pruub=10.712953568s) [2] r=-1 lpr=99 pi=[54,99)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 186.304855347s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99 pruub=10.712953568s) [2] r=-1 lpr=99 pi=[54,99)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 186.304855347s@ mbc={}] exit Start 0.000211 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 99 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99 pruub=10.712953568s) [2] r=-1 lpr=99 pi=[54,99)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 186.304855347s@ mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 103) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:11.731369+0000 osd.0 (osd.0) 102 : cluster [DBG] 8.c scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:11.745286+0000 osd.0 (osd.0) 103 : cluster [DBG] 8.c scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 99 handle_osd_map epochs [98,99], i have 99, src has [1,99]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.e scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.e scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:42.775204+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:12.716818+0000 osd.0 (osd.0) 104 : cluster [DBG] 11.e scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:12.730820+0000 osd.0 (osd.0) 105 : cluster [DBG] 11.e scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 851968 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 99 heartbeat osd_stat(store_statfs(0x4fcf01000/0x0/0x4ffc00000, data 0x92c66/0x12c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 99 handle_osd_map epochs [99,100], i have 99, src has [1,100]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.412063599s of 10.478116035s, submitted: 99
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=-1 lpr=99 pi=[54,99)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.005649 3 0.000322
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=-1 lpr=99 pi=[54,99)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.005913 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=99) [2] r=-1 lpr=99 pi=[54,99)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Reset 0.000055 1 0.000078
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 remapped mbc={}] exit Start 0.000014 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 105) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:12.716818+0000 osd.0 (osd.0) 104 : cluster [DBG] 11.e scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:12.730820+0000 osd.0 (osd.0) 105 : cluster [DBG] 11.e scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002620 2 0.000039
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000022 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 100 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 100 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.e scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:43.775309+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 1 last_log 106 sent 105 num 1 unsent 1 sending 1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:13.766694+0000 osd.0 (osd.0) 106 : cluster [DBG] 8.e scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.e scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 835584 heap: 66969600 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 100 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 106) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:13.766694+0000 osd.0 (osd.0) 106 : cluster [DBG] 8.e scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001902 3 0.000098
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.004621 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=54/55 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=54/54 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.001887 5 0.000187
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000087 1 0.000057
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000351 1 0.000064
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.049525 2 0.000090
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 101 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:44.775473+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 1 last_log 107 sent 106 num 1 unsent 1 sending 1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:13.780790+0000 osd.0 (osd.0) 107 : cluster [DBG] 8.e scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 794624 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 107) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:13.780790+0000 osd.0 (osd.0) 107 : cluster [DBG] 8.e scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 101 handle_osd_map epochs [101,102], i have 101, src has [1,102]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 101 handle_osd_map epochs [102,102], i have 102, src has [1,102]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.962574 1 0.000075
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary/Active 1.014612 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started/Primary 2.019253 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] exit Started 2.019280 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[54,100)/1 crt=44'389 mlcod 44'389 active+remapped mbc={255={}}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102 pruub=14.987218857s) [2] async=[2] r=-1 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 44'389 active pruub 193.604431152s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102 pruub=14.987159729s) [2] r=-1 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 193.604431152s@ mbc={}] exit Reset 0.000084 1 0.000120
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102 pruub=14.987159729s) [2] r=-1 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 193.604431152s@ mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102 pruub=14.987159729s) [2] r=-1 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 193.604431152s@ mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102 pruub=14.987159729s) [2] r=-1 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 193.604431152s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102 pruub=14.987159729s) [2] r=-1 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 193.604431152s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 102 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102 pruub=14.987159729s) [2] r=-1 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown NOTIFY pruub 193.604431152s@ mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 102 handle_osd_map epochs [102,102], i have 102, src has [1,102]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:45.775588+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 1 last_log 108 sent 107 num 1 unsent 1 sending 1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:15.766792+0000 osd.0 (osd.0) 108 : cluster [DBG] 11.17 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 778240 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 102 heartbeat osd_stat(store_statfs(0x4fcae8000/0x0/0x4ffc00000, data 0x97cb1/0x135000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 108) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:15.766792+0000 osd.0 (osd.0) 108 : cluster [DBG] 11.17 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:46.775737+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 1 last_log 109 sent 108 num 1 unsent 1 sending 1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:15.784469+0000 osd.0 (osd.0) 109 : cluster [DBG] 11.17 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _renew_subs
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 103 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=-1 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.444250 6 0.000087
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 103 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=-1 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 103 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=-1 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 103 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=-1 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000808 2 0.000037
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 103 pg[9.19( v 44'389 (0'0,44'389] local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=-1 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: not registered w/ OSD
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 103 pg[9.19( v 44'389 (0'0,44'389] lb MIN local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=-1 lpr=102 DELETING pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.053340 2 0.000159
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 103 pg[9.19( v 44'389 (0'0,44'389] lb MIN local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=-1 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.054264 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 103 pg[9.19( v 44'389 (0'0,44'389] lb MIN local-lis/les=100/101 n=5 ec=45/34 lis/c=100/54 les/c/f=101/55/0 sis=102) [2] r=-1 lpr=102 pi=[54,102)/1 crt=44'389 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.498550 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: not registered w/ OSD
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 737280 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 732658 data_alloc: 218103808 data_used: 143360
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 103 heartbeat osd_stat(store_statfs(0x4fcae8000/0x0/0x4ffc00000, data 0x97cb1/0x135000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 109) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:15.784469+0000 osd.0 (osd.0) 109 : cluster [DBG] 11.17 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:47.775877+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 729088 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 104 pg[9.1c(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104) [0] r=0 lpr=0 pi=[78,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000050 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104) [0] r=0 lpr=0 pi=[78,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104) [0] r=0 lpr=104 pi=[78,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000023
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104) [0] r=0 lpr=104 pi=[78,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104) [0] r=0 lpr=104 pi=[78,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104) [0] r=0 lpr=104 pi=[78,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104) [0] r=0 lpr=104 pi=[78,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104) [0] r=0 lpr=104 pi=[78,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104) [0] r=0 lpr=104 pi=[78,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104) [0] r=0 lpr=104 pi=[78,104)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104) [0] r=0 lpr=104 pi=[78,104)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000099 1 0.000035
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104) [0] r=0 lpr=104 pi=[78,104)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104) [0] r=0 lpr=104 pi=[78,104)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000024 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104) [0] r=0 lpr=104 pi=[78,104)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000143 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104) [0] r=0 lpr=104 pi=[78,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:48.775972+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 729088 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104) [0] r=0 lpr=104 pi=[78,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.870758 2 0.000053
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104) [0] r=0 lpr=104 pi=[78,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.870931 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104) [0] r=0 lpr=104 pi=[78,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.870950 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=104) [0] r=0 lpr=104 pi=[78,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[78,105)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[78,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000080 1 0.000122
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[78,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[78,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[78,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[78,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000010 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[78,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 105 handle_osd_map epochs [105,105], i have 105, src has [1,105]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:49.776075+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 745472 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 105 handle_osd_map epochs [105,106], i have 105, src has [1,106]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 106 pg[9.1c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.006320 6 0.000076
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 106 pg[9.1c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 106 pg[9.1c( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=78/78 les/c/f=79/79/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[78,105)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 106 pg[9.1c( v 44'389 lc 40'125 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[78,105)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002039 3 0.000125
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 106 pg[9.1c( v 44'389 lc 40'125 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[78,105)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 106 pg[9.1c( v 44'389 lc 40'125 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[78,105)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000039 1 0.000045
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 106 pg[9.1c( v 44'389 lc 40'125 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[78,105)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[78,105)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.049762 1 0.000023
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 106 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[78,105)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:50.776222+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 106 handle_osd_map epochs [106,107], i have 106, src has [1,107]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[78,105)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.508207 1 0.000026
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[78,105)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.560147 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[78,105)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 1.566514 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=105) [0]/[2] r=-1 lpr=105 pi=[78,105)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000672 1 0.000740
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000097 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 107 handle_osd_map epochs [107,107], i have 107, src has [1,107]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002596 2 0.000202
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000942 2 0.000099
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 107 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 638976 heap: 68018176 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:51.776330+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 107 handle_osd_map epochs [107,108], i have 108, src has [1,108]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 108 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999011 2 0.000172
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 108 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002647 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 108 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=105/106 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 108 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=107/108 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 108 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=107/108 n=5 ec=45/34 lis/c=105/78 les/c/f=106/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 108 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=107/108 n=5 ec=45/34 lis/c=107/78 les/c/f=108/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001434 4 0.000362
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 108 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=107/108 n=5 ec=45/34 lis/c=107/78 les/c/f=108/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 108 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=107/108 n=5 ec=45/34 lis/c=107/78 les/c/f=108/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 108 pg[9.1c( v 44'389 (0'0,44'389] local-lis/les=107/108 n=5 ec=45/34 lis/c=107/78 les/c/f=108/79/0 sis=107) [0] r=0 lpr=107 pi=[78,107)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 1687552 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 759117 data_alloc: 218103808 data_used: 147456
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:52.776428+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 1687552 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 108 heartbeat osd_stat(store_statfs(0x4fcad6000/0x0/0x4ffc00000, data 0xa1e31/0x147000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.f scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.296848297s of 10.344698906s, submitted: 42
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.f scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:53.776529+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:23.684998+0000 osd.0 (osd.0) 110 : cluster [DBG] 8.f scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:23.706166+0000 osd.0 (osd.0) 111 : cluster [DBG] 8.f scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 111) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:23.684998+0000 osd.0 (osd.0) 110 : cluster [DBG] 8.f scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:23.706166+0000 osd.0 (osd.0) 111 : cluster [DBG] 8.f scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 1679360 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:54.776690+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 1679360 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:55.776808+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 1671168 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.b scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.b scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:56.776895+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:26.700671+0000 osd.0 (osd.0) 112 : cluster [DBG] 8.b scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:26.714800+0000 osd.0 (osd.0) 113 : cluster [DBG] 8.b scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 113) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:26.700671+0000 osd.0 (osd.0) 112 : cluster [DBG] 8.b scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:26.714800+0000 osd.0 (osd.0) 113 : cluster [DBG] 8.b scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 1671168 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 760531 data_alloc: 218103808 data_used: 147456
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:57.777020+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 108 handle_osd_map epochs [108,109], i have 108, src has [1,109]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 1662976 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fcad7000/0x0/0x4ffc00000, data 0xa1e31/0x147000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 109 pg[9.1e(unlocked)] enter Initial
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109) [0] r=0 lpr=0 pi=[62,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000064 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109) [0] r=0 lpr=0 pi=[62,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109) [0] r=0 lpr=109 pi=[62,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000042 1 0.000064
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109) [0] r=0 lpr=109 pi=[62,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109) [0] r=0 lpr=109 pi=[62,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109) [0] r=0 lpr=109 pi=[62,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109) [0] r=0 lpr=109 pi=[62,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000252 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109) [0] r=0 lpr=109 pi=[62,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109) [0] r=0 lpr=109 pi=[62,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109) [0] r=0 lpr=109 pi=[62,109)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109) [0] r=0 lpr=109 pi=[62,109)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000183 1 0.000349
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109) [0] r=0 lpr=109 pi=[62,109)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109) [0] r=0 lpr=109 pi=[62,109)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000057 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109) [0] r=0 lpr=109 pi=[62,109)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000323 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109) [0] r=0 lpr=109 pi=[62,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:58.777115+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 109 handle_osd_map epochs [109,110], i have 109, src has [1,110]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 110 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109) [0] r=0 lpr=109 pi=[62,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.513033 2 0.000160
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 110 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109) [0] r=0 lpr=109 pi=[62,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.513403 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 110 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109) [0] r=0 lpr=109 pi=[62,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.513694 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 110 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=109) [0] r=0 lpr=109 pi=[62,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 110 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[62,110)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 110 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[62,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000060 1 0.000092
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 110 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[62,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 110 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[62,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 110 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[62,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 110 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[62,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 110 pg[9.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[62,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 110 handle_osd_map epochs [110,110], i have 110, src has [1,110]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 1687552 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:40:59.777216+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:29.718714+0000 osd.0 (osd.0) 114 : cluster [DBG] 8.9 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:29.732879+0000 osd.0 (osd.0) 115 : cluster [DBG] 8.9 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 115) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:29.718714+0000 osd.0 (osd.0) 114 : cluster [DBG] 8.9 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:29.732879+0000 osd.0 (osd.0) 115 : cluster [DBG] 8.9 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 111 pg[9.1e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.125412 5 0.000090
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 111 pg[9.1e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 111 pg[9.1e( v 44'389 lc 0'0 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=62/62 les/c/f=63/63/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[62,110)/1 crt=44'389 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: not registered w/ OSD
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 111 pg[9.1e( v 44'389 lc 40'220 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[62,110)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001749 4 0.000124
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 111 pg[9.1e( v 44'389 lc 40'220 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[62,110)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 111 pg[9.1e( v 44'389 lc 40'220 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[62,110)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000117 1 0.000054
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 111 pg[9.1e( v 44'389 lc 40'220 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[62,110)/1 luod=0'0 crt=44'389 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[62,110)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035507 1 0.000126
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 111 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[62,110)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 1589248 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:00.777362+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:30.751253+0000 osd.0 (osd.0) 116 : cluster [DBG] 7.6 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:30.765354+0000 osd.0 (osd.0) 117 : cluster [DBG] 7.6 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[62,110)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.809882 1 0.000026
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[62,110)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.847347 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[62,110)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] exit Started 1.972793 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[62,110)/1 luod=0'0 crt=44'389 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 luod=0'0 crt=44'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Reset 0.000040 1 0.000070
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Start
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000020 1 0.000025
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=0/0 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: merge_log_dups log.dups.size()=0olog.dups.size()=10
Nov 26 12:58:13 compute-0 ceph-osd[88362]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=10
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000733 3 0.000037
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 112 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 117) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:30.751253+0000 osd.0 (osd.0) 116 : cluster [DBG] 7.6 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:30.765354+0000 osd.0 (osd.0) 117 : cluster [DBG] 7.6 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 1572864 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 112 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xa6fb1/0x150000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:01.777510+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 112 handle_osd_map epochs [112,113], i have 112, src has [1,113]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 112 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 113 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997845 2 0.000042
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 113 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.998649 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 113 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=110/111 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 113 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 113 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=110/62 les/c/f=111/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 113 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/62 les/c/f=113/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001410 4 0.000130
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 113 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/62 les/c/f=113/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 113 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/62 les/c/f=113/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 pg_epoch: 113 pg[9.1e( v 44'389 (0'0,44'389] local-lis/les=112/113 n=5 ec=45/34 lis/c=112/62 les/c/f=113/63/0 sis=112) [0] r=0 lpr=112 pi=[62,112)/1 crt=44'389 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 1564672 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 788831 data_alloc: 218103808 data_used: 163840
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xa8a33/0x153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:02.777601+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 113 handle_osd_map epochs [113,114], i have 113, src has [1,114]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 1556480 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:03.777719+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 114 handle_osd_map epochs [115,115], i have 114, src has [1,115]
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.385306358s of 10.422210693s, submitted: 42
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 1515520 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:04.777843+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 1507328 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:05.777931+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 1466368 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:06.778025+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 1433600 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 794427 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac1000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:07.778126+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 1433600 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:08.778210+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 1433600 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:09.778308+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 1425408 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:10.778417+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 1425408 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:11.778509+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 1417216 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 794427 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:12.778608+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 1417216 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac1000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:13.778793+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 1400832 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.656607628s of 10.661879539s, submitted: 4
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:14.778876+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 3 last_log 120 sent 117 num 3 unsent 3 sending 3
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:43.778882+0000 osd.0 (osd.0) 118 : cluster [DBG] 7.4 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:43.792768+0000 osd.0 (osd.0) 119 : cluster [DBG] 7.4 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:44.769078+0000 osd.0 (osd.0) 120 : cluster [DBG] 7.3 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 120) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:43.778882+0000 osd.0 (osd.0) 118 : cluster [DBG] 7.4 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:43.792768+0000 osd.0 (osd.0) 119 : cluster [DBG] 7.4 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:44.769078+0000 osd.0 (osd.0) 120 : cluster [DBG] 7.3 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 1400832 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:15.779007+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 1 last_log 121 sent 120 num 1 unsent 1 sending 1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:44.783209+0000 osd.0 (osd.0) 121 : cluster [DBG] 7.3 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 121) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:44.783209+0000 osd.0 (osd.0) 121 : cluster [DBG] 7.3 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 1400832 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:16.779140+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 1392640 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796989 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:17.779253+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:46.829587+0000 osd.0 (osd.0) 122 : cluster [DBG] 11.4 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:46.843787+0000 osd.0 (osd.0) 123 : cluster [DBG] 11.4 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 123) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:46.829587+0000 osd.0 (osd.0) 122 : cluster [DBG] 11.4 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:46.843787+0000 osd.0 (osd.0) 123 : cluster [DBG] 11.4 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 1392640 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:18.779386+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 1384448 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:19.779525+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:48.817184+0000 osd.0 (osd.0) 124 : cluster [DBG] 7.f scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:48.831320+0000 osd.0 (osd.0) 125 : cluster [DBG] 7.f scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 125) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:48.817184+0000 osd.0 (osd.0) 124 : cluster [DBG] 7.f scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:48.831320+0000 osd.0 (osd.0) 125 : cluster [DBG] 7.f scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 1384448 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:20.779708+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:49.788608+0000 osd.0 (osd.0) 126 : cluster [DBG] 7.9 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:49.802732+0000 osd.0 (osd.0) 127 : cluster [DBG] 7.9 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 127) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:49.788608+0000 osd.0 (osd.0) 126 : cluster [DBG] 7.9 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:49.802732+0000 osd.0 (osd.0) 127 : cluster [DBG] 7.9 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 1384448 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:21.779873+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 1376256 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 800431 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:22.779982+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:51.835630+0000 osd.0 (osd.0) 128 : cluster [DBG] 11.6 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:51.849746+0000 osd.0 (osd.0) 129 : cluster [DBG] 11.6 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 129) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:51.835630+0000 osd.0 (osd.0) 128 : cluster [DBG] 11.6 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:51.849746+0000 osd.0 (osd.0) 129 : cluster [DBG] 11.6 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 1376256 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:23.780119+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 1368064 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:24.780214+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:53.841855+0000 osd.0 (osd.0) 130 : cluster [DBG] 8.18 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:53.859543+0000 osd.0 (osd.0) 131 : cluster [DBG] 8.18 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.105971336s of 10.121919632s, submitted: 12
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 131) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:53.841855+0000 osd.0 (osd.0) 130 : cluster [DBG] 8.18 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:53.859543+0000 osd.0 (osd.0) 131 : cluster [DBG] 8.18 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 1368064 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:25.780370+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:54.890989+0000 osd.0 (osd.0) 132 : cluster [DBG] 8.1f scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:54.905162+0000 osd.0 (osd.0) 133 : cluster [DBG] 8.1f scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 133) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:54.890989+0000 osd.0 (osd.0) 132 : cluster [DBG] 8.1f scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:54.905162+0000 osd.0 (osd.0) 133 : cluster [DBG] 8.1f scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 1359872 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:26.780548+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 1351680 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 802727 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:27.780678+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 1343488 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:28.780820+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 1327104 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:29.780977+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:59.054443+0000 osd.0 (osd.0) 134 : cluster [DBG] 8.1d scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:41:59.068287+0000 osd.0 (osd.0) 135 : cluster [DBG] 8.1d scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.13 deep-scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 7.13 deep-scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 135) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:59.054443+0000 osd.0 (osd.0) 134 : cluster [DBG] 8.1d scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:41:59.068287+0000 osd.0 (osd.0) 135 : cluster [DBG] 8.1d scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 1327104 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:30.781329+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:00.008900+0000 osd.0 (osd.0) 136 : cluster [DBG] 7.13 deep-scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:00.022943+0000 osd.0 (osd.0) 137 : cluster [DBG] 7.13 deep-scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 137) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:00.008900+0000 osd.0 (osd.0) 136 : cluster [DBG] 7.13 deep-scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:00.022943+0000 osd.0 (osd.0) 137 : cluster [DBG] 7.13 deep-scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 1318912 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:31.781515+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 1318912 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 805023 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:32.781646+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 1302528 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:33.781785+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:03.033540+0000 osd.0 (osd.0) 138 : cluster [DBG] 8.1a scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:03.047700+0000 osd.0 (osd.0) 139 : cluster [DBG] 8.1a scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 139) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:03.033540+0000 osd.0 (osd.0) 138 : cluster [DBG] 8.1a scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:03.047700+0000 osd.0 (osd.0) 139 : cluster [DBG] 8.1a scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 1294336 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:34.781949+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 1294336 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:35.782048+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.210412979s of 11.222543716s, submitted: 8
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 1294336 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:36.782159+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:06.113605+0000 osd.0 (osd.0) 140 : cluster [DBG] 11.19 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:06.127683+0000 osd.0 (osd.0) 141 : cluster [DBG] 11.19 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 141) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:06.113605+0000 osd.0 (osd.0) 140 : cluster [DBG] 11.19 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:06.127683+0000 osd.0 (osd.0) 141 : cluster [DBG] 11.19 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 1286144 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 807320 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:37.782319+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 1286144 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:38.782418+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:08.121163+0000 osd.0 (osd.0) 142 : cluster [DBG] 11.10 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:08.135158+0000 osd.0 (osd.0) 143 : cluster [DBG] 11.10 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 143) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:08.121163+0000 osd.0 (osd.0) 142 : cluster [DBG] 11.10 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:08.135158+0000 osd.0 (osd.0) 143 : cluster [DBG] 11.10 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 1269760 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:39.782533+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:09.072602+0000 osd.0 (osd.0) 144 : cluster [DBG] 8.6 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:09.086584+0000 osd.0 (osd.0) 145 : cluster [DBG] 8.6 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.d deep-scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.d deep-scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 145) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:09.072602+0000 osd.0 (osd.0) 144 : cluster [DBG] 8.6 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:09.086584+0000 osd.0 (osd.0) 145 : cluster [DBG] 8.6 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 1269760 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:40.782818+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:10.108494+0000 osd.0 (osd.0) 146 : cluster [DBG] 10.d deep-scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:10.126168+0000 osd.0 (osd.0) 147 : cluster [DBG] 10.d deep-scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 147) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:10.108494+0000 osd.0 (osd.0) 146 : cluster [DBG] 10.d deep-scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:10.126168+0000 osd.0 (osd.0) 147 : cluster [DBG] 10.d deep-scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 1261568 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:41.782933+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 1261568 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810764 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:42.783032+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 1261568 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:43.783124+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 1261568 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:44.783271+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.7 deep-scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.7 deep-scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 1261568 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:45.783425+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:15.085505+0000 osd.0 (osd.0) 148 : cluster [DBG] 10.7 deep-scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:15.099620+0000 osd.0 (osd.0) 149 : cluster [DBG] 10.7 deep-scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.992955208s of 10.004768372s, submitted: 10
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 149) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:15.085505+0000 osd.0 (osd.0) 148 : cluster [DBG] 10.7 deep-scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:15.099620+0000 osd.0 (osd.0) 149 : cluster [DBG] 10.7 deep-scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 1253376 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:46.783577+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:16.118459+0000 osd.0 (osd.0) 150 : cluster [DBG] 10.4 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:16.132535+0000 osd.0 (osd.0) 151 : cluster [DBG] 10.4 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 151) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:16.118459+0000 osd.0 (osd.0) 150 : cluster [DBG] 10.4 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:16.132535+0000 osd.0 (osd.0) 151 : cluster [DBG] 10.4 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 1253376 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 813060 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:47.783785+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 1245184 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:48.783913+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 1245184 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:49.784008+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 1236992 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:50.784125+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:20.039946+0000 osd.0 (osd.0) 152 : cluster [DBG] 10.8 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:20.054074+0000 osd.0 (osd.0) 153 : cluster [DBG] 10.8 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 153) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:20.039946+0000 osd.0 (osd.0) 152 : cluster [DBG] 10.8 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:20.054074+0000 osd.0 (osd.0) 153 : cluster [DBG] 10.8 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 1228800 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:51.784249+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 1228800 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 814208 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:52.784361+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 1220608 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:53.784459+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 1220608 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:54.784550+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.e scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.e scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 1220608 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:55.784659+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:24.960887+0000 osd.0 (osd.0) 154 : cluster [DBG] 10.e scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:24.978618+0000 osd.0 (osd.0) 155 : cluster [DBG] 10.e scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 155) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:24.960887+0000 osd.0 (osd.0) 154 : cluster [DBG] 10.e scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:24.978618+0000 osd.0 (osd.0) 155 : cluster [DBG] 10.e scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 1212416 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:56.784783+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 1212416 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 815356 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:57.784877+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 1204224 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:58.785023+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 1196032 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:41:59.785150+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 1187840 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:00.785333+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 1187840 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:01.785422+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 1187840 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 815356 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:02.785700+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.859439850s of 16.866828918s, submitted: 6
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 1187840 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:03.785794+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:32.985377+0000 osd.0 (osd.0) 156 : cluster [DBG] 10.1 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:32.999461+0000 osd.0 (osd.0) 157 : cluster [DBG] 10.1 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 157) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:32.985377+0000 osd.0 (osd.0) 156 : cluster [DBG] 10.1 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:32.999461+0000 osd.0 (osd.0) 157 : cluster [DBG] 10.1 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 1179648 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:04.785946+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 1179648 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:05.786077+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 1171456 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:06.786199+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 1171456 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 816504 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:07.786336+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 1163264 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:08.786475+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 1163264 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:09.786596+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 1163264 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:10.786733+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 1163264 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:11.786842+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 1146880 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 816504 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:12.786973+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 1146880 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:13.787075+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 1138688 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:14.787169+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.946941376s of 11.949298859s, submitted: 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 1138688 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:15.787261+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:44.934610+0000 osd.0 (osd.0) 158 : cluster [DBG] 10.1e scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:44.948851+0000 osd.0 (osd.0) 159 : cluster [DBG] 10.1e scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 159) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:44.934610+0000 osd.0 (osd.0) 158 : cluster [DBG] 10.1e scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:44.948851+0000 osd.0 (osd.0) 159 : cluster [DBG] 10.1e scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 1122304 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:16.787442+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 1130496 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 818802 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:17.787570+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:46.918215+0000 osd.0 (osd.0) 160 : cluster [DBG] 10.15 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:46.935902+0000 osd.0 (osd.0) 161 : cluster [DBG] 10.15 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 161) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:46.918215+0000 osd.0 (osd.0) 160 : cluster [DBG] 10.15 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:46.935902+0000 osd.0 (osd.0) 161 : cluster [DBG] 10.15 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 1105920 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:18.787721+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:47.869028+0000 osd.0 (osd.0) 162 : cluster [DBG] 10.9 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:47.886779+0000 osd.0 (osd.0) 163 : cluster [DBG] 10.9 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 163) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:47.869028+0000 osd.0 (osd.0) 162 : cluster [DBG] 10.9 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:47.886779+0000 osd.0 (osd.0) 163 : cluster [DBG] 10.9 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 1105920 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:19.787876+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:48.902694+0000 osd.0 (osd.0) 164 : cluster [DBG] 10.16 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:48.916632+0000 osd.0 (osd.0) 165 : cluster [DBG] 10.16 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 165) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:48.902694+0000 osd.0 (osd.0) 164 : cluster [DBG] 10.16 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:48.916632+0000 osd.0 (osd.0) 165 : cluster [DBG] 10.16 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 1097728 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:20.788017+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 1097728 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:21.788117+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 1097728 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 821099 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:22.788214+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 1089536 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:23.788319+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 1081344 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:24.788453+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:53.818681+0000 osd.0 (osd.0) 166 : cluster [DBG] 10.17 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:53.832895+0000 osd.0 (osd.0) 167 : cluster [DBG] 10.17 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 167) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:53.818681+0000 osd.0 (osd.0) 166 : cluster [DBG] 10.17 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:53.832895+0000 osd.0 (osd.0) 167 : cluster [DBG] 10.17 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 1073152 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:25.788592+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 1073152 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:26.788694+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 1073152 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 822248 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:27.788828+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 1064960 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.748352051s of 13.758462906s, submitted: 10
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:28.788956+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:58.693355+0000 osd.0 (osd.0) 168 : cluster [DBG] 9.11 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:58.724957+0000 osd.0 (osd.0) 169 : cluster [DBG] 9.11 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 169) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:58.693355+0000 osd.0 (osd.0) 168 : cluster [DBG] 9.11 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:58.724957+0000 osd.0 (osd.0) 169 : cluster [DBG] 9.11 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 1064960 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.5 deep-scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.5 deep-scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:29.789148+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:59.738249+0000 osd.0 (osd.0) 170 : cluster [DBG] 9.5 deep-scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:42:59.777082+0000 osd.0 (osd.0) 171 : cluster [DBG] 9.5 deep-scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 171) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:59.738249+0000 osd.0 (osd.0) 170 : cluster [DBG] 9.5 deep-scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:42:59.777082+0000 osd.0 (osd.0) 171 : cluster [DBG] 9.5 deep-scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 1040384 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:30.789395+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 1032192 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:31.789548+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 1032192 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 824543 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.b scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.b scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:32.789669+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:02.726020+0000 osd.0 (osd.0) 172 : cluster [DBG] 9.b scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:02.754251+0000 osd.0 (osd.0) 173 : cluster [DBG] 9.b scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 173) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:02.726020+0000 osd.0 (osd.0) 172 : cluster [DBG] 9.b scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:02.754251+0000 osd.0 (osd.0) 173 : cluster [DBG] 9.b scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 1024000 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:33.789836+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:03.703310+0000 osd.0 (osd.0) 174 : cluster [DBG] 9.9 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:03.734269+0000 osd.0 (osd.0) 175 : cluster [DBG] 9.9 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 175) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:03.703310+0000 osd.0 (osd.0) 174 : cluster [DBG] 9.9 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:03.734269+0000 osd.0 (osd.0) 175 : cluster [DBG] 9.9 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 1007616 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:34.790014+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 1007616 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:35.790151+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 999424 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:36.790267+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:06.689787+0000 osd.0 (osd.0) 176 : cluster [DBG] 6.3 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:06.710919+0000 osd.0 (osd.0) 177 : cluster [DBG] 6.3 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 177) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:06.689787+0000 osd.0 (osd.0) 176 : cluster [DBG] 6.3 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:06.710919+0000 osd.0 (osd.0) 177 : cluster [DBG] 6.3 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 999424 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 827984 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:37.790431+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 991232 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:38.790573+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 999424 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:39.790731+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 991232 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:40.790970+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 991232 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:41.791154+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 991232 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 827984 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:42.791314+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 983040 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.031112671s of 15.047925949s, submitted: 10
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:43.791438+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:13.741043+0000 osd.0 (osd.0) 178 : cluster [DBG] 9.1 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:13.779810+0000 osd.0 (osd.0) 179 : cluster [DBG] 9.1 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 179) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:13.741043+0000 osd.0 (osd.0) 178 : cluster [DBG] 9.1 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:13.779810+0000 osd.0 (osd.0) 179 : cluster [DBG] 9.1 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 983040 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:44.791591+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 983040 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:45.791720+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 974848 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:46.792576+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 974848 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 829131 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:47.792695+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.d scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.d scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 966656 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:48.792840+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:17.852444+0000 osd.0 (osd.0) 180 : cluster [DBG] 9.d scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:17.895369+0000 osd.0 (osd.0) 181 : cluster [DBG] 9.d scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 181) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:17.852444+0000 osd.0 (osd.0) 180 : cluster [DBG] 9.d scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:17.895369+0000 osd.0 (osd.0) 181 : cluster [DBG] 9.d scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 958464 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:49.793041+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:18.832120+0000 osd.0 (osd.0) 182 : cluster [DBG] 9.3 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:18.874404+0000 osd.0 (osd.0) 183 : cluster [DBG] 9.3 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 183) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:18.832120+0000 osd.0 (osd.0) 182 : cluster [DBG] 9.3 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:18.874404+0000 osd.0 (osd.0) 183 : cluster [DBG] 9.3 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 958464 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:50.793535+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 958464 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:51.793699+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 950272 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831425 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:52.793854+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1b deep-scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1b deep-scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 942080 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:53.794015+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:22.905210+0000 osd.0 (osd.0) 184 : cluster [DBG] 9.1b deep-scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:22.926448+0000 osd.0 (osd.0) 185 : cluster [DBG] 9.1b deep-scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.138696671s of 10.151112556s, submitted: 8
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 185) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:22.905210+0000 osd.0 (osd.0) 184 : cluster [DBG] 9.1b deep-scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:22.926448+0000 osd.0 (osd.0) 185 : cluster [DBG] 9.1b deep-scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 933888 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:54.794247+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:23.892044+0000 osd.0 (osd.0) 186 : cluster [DBG] 6.7 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:23.909724+0000 osd.0 (osd.0) 187 : cluster [DBG] 6.7 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 187) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:23.892044+0000 osd.0 (osd.0) 186 : cluster [DBG] 6.7 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:23.909724+0000 osd.0 (osd.0) 187 : cluster [DBG] 6.7 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 933888 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:55.794482+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 925696 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:56.794638+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 925696 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 833720 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:57.794804+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 925696 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:58.794934+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 917504 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:42:59.795070+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 917504 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:00.796647+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 917504 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:01.797464+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 901120 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 833720 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:02.797637+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 901120 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:03.798504+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1d deep-scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1d deep-scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 892928 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:04.798691+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:33.824240+0000 osd.0 (osd.0) 188 : cluster [DBG] 9.1d deep-scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:33.856019+0000 osd.0 (osd.0) 189 : cluster [DBG] 9.1d deep-scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.919983864s of 10.925888062s, submitted: 4
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 189) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:33.824240+0000 osd.0 (osd.0) 188 : cluster [DBG] 9.1d deep-scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:33.856019+0000 osd.0 (osd.0) 189 : cluster [DBG] 9.1d deep-scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 876544 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:05.798921+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:34.818132+0000 osd.0 (osd.0) 190 : cluster [DBG] 6.5 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:34.839368+0000 osd.0 (osd.0) 191 : cluster [DBG] 6.5 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 191) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:34.818132+0000 osd.0 (osd.0) 190 : cluster [DBG] 6.5 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:34.839368+0000 osd.0 (osd.0) 191 : cluster [DBG] 6.5 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 876544 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:06.799140+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 868352 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836015 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:07.799286+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 860160 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:08.799448+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 860160 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:09.799582+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 851968 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:10.799722+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:39.861459+0000 osd.0 (osd.0) 192 : cluster [DBG] 6.9 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:39.875556+0000 osd.0 (osd.0) 193 : cluster [DBG] 6.9 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 193) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:39.861459+0000 osd.0 (osd.0) 192 : cluster [DBG] 6.9 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:39.875556+0000 osd.0 (osd.0) 193 : cluster [DBG] 6.9 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 843776 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:11.799888+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 835584 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837162 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:12.800016+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 835584 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:13.800169+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 835584 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:14.800319+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.015413284s of 10.020649910s, submitted: 4
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 811008 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:15.800499+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:44.838745+0000 osd.0 (osd.0) 194 : cluster [DBG] 6.a scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:44.852874+0000 osd.0 (osd.0) 195 : cluster [DBG] 6.a scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 195) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:44.838745+0000 osd.0 (osd.0) 194 : cluster [DBG] 6.a scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:44.852874+0000 osd.0 (osd.0) 195 : cluster [DBG] 6.a scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 802816 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:16.800727+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:45.851607+0000 osd.0 (osd.0) 196 : cluster [DBG] 9.16 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:45.879942+0000 osd.0 (osd.0) 197 : cluster [DBG] 9.16 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 197) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:45.851607+0000 osd.0 (osd.0) 196 : cluster [DBG] 9.16 scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:45.879942+0000 osd.0 (osd.0) 197 : cluster [DBG] 9.16 scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 802816 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 839457 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:17.800948+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 794624 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:18.801069+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:47.866301+0000 osd.0 (osd.0) 198 : cluster [DBG] 9.1c scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:47.905246+0000 osd.0 (osd.0) 199 : cluster [DBG] 9.1c scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 199) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:47.866301+0000 osd.0 (osd.0) 198 : cluster [DBG] 9.1c scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:47.905246+0000 osd.0 (osd.0) 199 : cluster [DBG] 9.1c scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 794624 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:19.801222+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 786432 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:20.801360+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 786432 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:21.801480+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 786432 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 840605 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:22.801617+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 786432 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:23.801767+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 778240 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:24.801912+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 778240 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:25.802048+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.100176811s of 11.107900620s, submitted: 6
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 770048 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:26.802205+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:55.946811+0000 osd.0 (osd.0) 200 : cluster [DBG] 9.1e scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  will send 2025-11-26T12:43:55.978529+0000 osd.0 (osd.0) 201 : cluster [DBG] 9.1e scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client handle_log_ack log(last 201) v1
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:55.946811+0000 osd.0 (osd.0) 200 : cluster [DBG] 9.1e scrub starts
Nov 26 12:58:13 compute-0 ceph-osd[88362]: log_client  logged 2025-11-26T12:43:55.978529+0000 osd.0 (osd.0) 201 : cluster [DBG] 9.1e scrub ok
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 761856 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:27.802403+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 770048 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:28.802516+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 761856 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:29.802635+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 761856 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:30.802824+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68313088 unmapped: 753664 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:31.802947+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68313088 unmapped: 753664 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:32.803061+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68313088 unmapped: 753664 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:33.803173+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68321280 unmapped: 745472 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:34.803290+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68321280 unmapped: 745472 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:35.803402+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68337664 unmapped: 729088 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:36.803525+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14547 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68337664 unmapped: 729088 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:37.803659+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68337664 unmapped: 729088 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:38.803811+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68345856 unmapped: 720896 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:39.803976+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68345856 unmapped: 720896 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:40.804157+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68345856 unmapped: 720896 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:41.804332+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68354048 unmapped: 712704 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:42.804481+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68354048 unmapped: 712704 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:43.804613+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68354048 unmapped: 712704 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:44.804736+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68362240 unmapped: 704512 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:45.805299+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68362240 unmapped: 704512 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:46.805489+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68370432 unmapped: 696320 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:47.805658+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68378624 unmapped: 688128 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:48.805830+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68386816 unmapped: 679936 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:49.806033+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68386816 unmapped: 679936 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:50.806250+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68386816 unmapped: 679936 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:51.806414+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68395008 unmapped: 671744 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:52.806526+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68386816 unmapped: 679936 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:53.806626+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68386816 unmapped: 679936 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:54.807812+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68386816 unmapped: 679936 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:55.807922+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68395008 unmapped: 671744 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:56.808020+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68395008 unmapped: 671744 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:57.808128+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68403200 unmapped: 663552 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:58.808260+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68403200 unmapped: 663552 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:43:59.808379+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68403200 unmapped: 663552 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:00.808504+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68411392 unmapped: 655360 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:01.808638+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68411392 unmapped: 655360 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:02.808791+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68419584 unmapped: 647168 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:03.808909+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68419584 unmapped: 647168 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:04.809026+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68419584 unmapped: 647168 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:05.809265+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 638976 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:06.809382+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 638976 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:07.809500+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 638976 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:08.809612+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 630784 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:09.809731+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 630784 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:10.809891+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 630784 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:11.810003+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 630784 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:12.810114+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 622592 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:13.810228+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 622592 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:14.810338+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 614400 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:15.810450+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 614400 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:16.810574+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 606208 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:17.810679+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 606208 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:18.810811+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 606208 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:19.810920+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 598016 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:20.811037+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 598016 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:21.811132+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 589824 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:22.811268+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 589824 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:23.811375+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 589824 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:24.811516+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 581632 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:25.811644+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 581632 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:26.812200+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 573440 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:27.812313+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 573440 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:28.812420+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 573440 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:29.812530+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 573440 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:30.812666+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68501504 unmapped: 565248 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:31.812806+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68501504 unmapped: 565248 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:32.812910+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68501504 unmapped: 565248 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:33.813011+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 557056 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:34.813126+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 557056 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:35.813243+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 548864 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:36.813348+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 548864 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:37.813461+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 548864 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:38.813579+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 540672 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:39.813719+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 540672 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:40.813886+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 532480 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:41.813989+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 532480 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:42.814115+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 524288 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:43.814228+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 516096 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:44.814394+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 516096 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:45.814538+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 516096 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:46.814662+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 507904 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:47.814790+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 516096 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:48.814908+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 516096 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:49.815012+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 507904 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:50.815167+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 507904 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:51.815268+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 499712 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:52.815398+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 499712 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:53.815501+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 491520 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:54.815601+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 491520 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:55.815699+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 475136 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:56.815829+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68599808 unmapped: 466944 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:57.815962+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68599808 unmapped: 466944 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:58.816064+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 458752 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:44:59.816193+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 458752 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:00.816326+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 458752 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:01.816433+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 450560 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:02.816534+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 450560 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:03.816631+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 450560 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:04.816750+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68624384 unmapped: 442368 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:05.816886+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68624384 unmapped: 442368 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:06.816986+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68624384 unmapped: 442368 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:07.817095+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 434176 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:08.817201+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 434176 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:09.817308+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 434176 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:10.817447+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 425984 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:11.817551+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 425984 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:12.817656+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 417792 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:13.817778+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 417792 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:14.817873+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 417792 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:15.817978+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 409600 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:16.818089+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 409600 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:17.818192+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 401408 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:18.818316+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 401408 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:19.818420+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 401408 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:20.818545+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 401408 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:21.819093+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 393216 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:22.819652+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 393216 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:23.819751+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 385024 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:24.819872+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 385024 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:25.819979+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 385024 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:26.820137+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 376832 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:27.820238+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 376832 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:28.820330+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 368640 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:29.820432+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 368640 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:30.820575+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 360448 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:31.820683+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 360448 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:32.820815+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 360448 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:33.820948+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 352256 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:34.821065+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 352256 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:35.821250+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 352256 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:36.821382+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 352256 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:37.821490+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 344064 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:38.821592+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 344064 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:39.821687+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68730880 unmapped: 335872 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:40.821784+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68730880 unmapped: 335872 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:41.821905+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68730880 unmapped: 335872 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:42.822038+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:43.822175+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 327680 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:44.822314+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 327680 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:45.822473+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 327680 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:46.822574+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 319488 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:47.822672+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 319488 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:48.822813+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 294912 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:49.822909+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 294912 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:50.823021+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 286720 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:51.823121+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 286720 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:52.823232+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 286720 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:53.823330+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 294912 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:54.823593+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 286720 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:55.823687+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 286720 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:56.823788+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 278528 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:57.823884+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 278528 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:58.823992+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 270336 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:45:59.824107+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 262144 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:00.824233+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 262144 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:01.824379+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 262144 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:02.824473+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68812800 unmapped: 253952 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:03.824604+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68812800 unmapped: 253952 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:04.824714+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68812800 unmapped: 253952 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:05.824822+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 245760 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:06.824927+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 245760 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:07.825026+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 237568 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:08.825116+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 229376 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:09.825215+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 229376 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:10.825356+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 221184 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:11.825499+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 212992 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:12.825636+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 221184 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:13.825739+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 212992 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:14.825801+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 212992 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:15.825892+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 204800 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:16.825993+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 204800 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:17.826096+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 196608 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:18.826185+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 196608 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:19.826283+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 196608 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:20.826390+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68878336 unmapped: 188416 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:21.826477+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68878336 unmapped: 188416 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:22.826577+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68878336 unmapped: 188416 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:23.826666+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68886528 unmapped: 180224 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:24.826770+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68886528 unmapped: 180224 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:25.826867+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68894720 unmapped: 172032 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:26.826975+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68886528 unmapped: 180224 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:27.827072+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68886528 unmapped: 180224 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:28.827175+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68902912 unmapped: 163840 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:29.827274+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68902912 unmapped: 163840 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:30.827395+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68911104 unmapped: 155648 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:31.827543+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68911104 unmapped: 155648 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:32.827665+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68911104 unmapped: 155648 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:33.827835+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68911104 unmapped: 155648 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:34.827963+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68919296 unmapped: 147456 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:35.828127+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68919296 unmapped: 147456 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:36.828229+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68919296 unmapped: 147456 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:37.828351+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68927488 unmapped: 139264 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:38.828461+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68927488 unmapped: 139264 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:39.828575+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68935680 unmapped: 131072 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:40.828705+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68935680 unmapped: 131072 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:41.828814+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68943872 unmapped: 122880 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:42.828904+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68943872 unmapped: 122880 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:43.829000+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68943872 unmapped: 122880 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:44.829091+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68952064 unmapped: 114688 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:45.829190+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68952064 unmapped: 114688 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:46.829285+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68952064 unmapped: 114688 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:47.829397+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68952064 unmapped: 114688 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:48.829500+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68960256 unmapped: 106496 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:49.829608+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68960256 unmapped: 106496 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:50.829740+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 98304 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:51.829873+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 98304 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:52.829994+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 90112 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:53.830095+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68984832 unmapped: 81920 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:54.830202+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68984832 unmapped: 81920 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:55.830325+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68984832 unmapped: 81920 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:56.830429+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 73728 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:57.830526+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 73728 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:58.830626+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 73728 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:46:59.830746+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69001216 unmapped: 65536 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:00.830918+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69001216 unmapped: 65536 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:01.831881+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69001216 unmapped: 65536 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:02.832019+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69009408 unmapped: 57344 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:03.832112+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69009408 unmapped: 57344 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:04.832282+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69009408 unmapped: 57344 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:05.832410+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 49152 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:06.832533+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 49152 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:07.832665+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69025792 unmapped: 40960 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:08.832805+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69025792 unmapped: 40960 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:09.832968+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69025792 unmapped: 40960 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:10.833126+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 32768 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:11.833244+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 32768 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:12.833374+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 32768 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:13.833497+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 16384 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:14.833624+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 16384 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:15.833767+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 8192 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:16.833908+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 0 heap: 69066752 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:17.834074+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1040384 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:18.835483+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1032192 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:19.835606+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1032192 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:20.836072+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1032192 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:21.836207+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1024000 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:22.836321+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1024000 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:23.836444+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1024000 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:24.836566+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1015808 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:25.836692+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1015808 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:26.836809+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1007616 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:27.836993+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1007616 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:28.837115+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69115904 unmapped: 999424 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:29.837246+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69115904 unmapped: 999424 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:30.837385+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69115904 unmapped: 999424 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:31.837499+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 991232 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:32.837638+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 991232 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:33.837775+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 991232 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:34.837897+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 983040 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:35.838002+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 983040 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:36.838118+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 983040 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:37.838226+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 983040 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:38.838385+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 966656 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:39.838507+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 966656 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:40.838647+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69156864 unmapped: 958464 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:41.838772+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69156864 unmapped: 958464 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:42.838880+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69156864 unmapped: 958464 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:43.838988+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69165056 unmapped: 950272 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:44.839087+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69165056 unmapped: 950272 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:45.839203+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 942080 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5487 writes, 23K keys, 5487 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5487 writes, 835 syncs, 6.57 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5487 writes, 23K keys, 5487 commit groups, 1.0 writes per commit group, ingest: 18.42 MB, 0.03 MB/s
                                           Interval WAL: 5487 writes, 835 syncs, 6.57 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:46.839305+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69230592 unmapped: 884736 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:47.839461+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69230592 unmapped: 884736 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:48.839580+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69238784 unmapped: 876544 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:49.839702+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69238784 unmapped: 876544 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:50.839822+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 868352 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:51.839948+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 868352 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:52.840077+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 868352 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:53.840231+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 860160 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:54.840364+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 860160 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:55.840526+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 860160 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:56.840749+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69263360 unmapped: 851968 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:57.840908+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69263360 unmapped: 851968 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:58.841043+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69263360 unmapped: 851968 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:47:59.841178+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69271552 unmapped: 843776 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:00.841332+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69271552 unmapped: 843776 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:01.841511+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 835584 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:02.843178+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 835584 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:03.843335+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 835584 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:04.843472+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 827392 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:05.843593+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 827392 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:06.843725+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69296128 unmapped: 819200 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:07.843803+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69296128 unmapped: 819200 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:08.843917+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 835584 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:09.844039+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 827392 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:10.844197+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 827392 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:11.844323+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 827392 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:12.844483+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69296128 unmapped: 819200 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:13.844596+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69296128 unmapped: 819200 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:14.844716+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69304320 unmapped: 811008 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:15.844850+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69304320 unmapped: 811008 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:16.844973+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69312512 unmapped: 802816 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:17.845104+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69312512 unmapped: 802816 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:18.845219+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 794624 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:19.845324+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 786432 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:20.845450+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 786432 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:21.845555+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69337088 unmapped: 778240 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:22.845663+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69345280 unmapped: 770048 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:23.845802+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69345280 unmapped: 770048 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:24.845927+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 761856 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:25.846038+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 761856 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:26.846153+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 761856 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:27.846269+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 753664 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:28.846407+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 753664 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:29.846529+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 753664 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:30.846673+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 745472 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:31.846785+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 745472 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:32.846897+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 737280 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:33.847010+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69386240 unmapped: 729088 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:34.847113+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69386240 unmapped: 729088 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:35.847214+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69394432 unmapped: 720896 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:36.847322+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 69394432 unmapped: 720896 heap: 70115328 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 311.620971680s of 311.623168945s, submitted: 2
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:37.847426+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 139264 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:38.847542+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 139264 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:39.847742+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 131072 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:40.847936+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 131072 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:41.848067+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 131072 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:42.848167+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 122880 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:43.848294+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 122880 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:44.848475+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 114688 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:45.848584+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 114688 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:46.848708+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 114688 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:47.848811+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 106496 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:48.848929+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 106496 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:49.849049+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 98304 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:50.849214+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 98304 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:51.849365+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 98304 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:52.849523+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 90112 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:53.850200+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 90112 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:54.850354+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 81920 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:55.850500+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 81920 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:56.850608+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 73728 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:57.850717+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 73728 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:58.850804+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 73728 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:48:59.850956+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71098368 unmapped: 65536 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:00.851095+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71098368 unmapped: 65536 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:01.851202+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71098368 unmapped: 65536 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:02.851310+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71098368 unmapped: 65536 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:03.851416+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71098368 unmapped: 65536 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:04.851749+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71106560 unmapped: 57344 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:05.851880+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71106560 unmapped: 57344 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:06.851984+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 49152 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:07.852144+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 40960 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:08.852248+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 40960 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:09.852583+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 32768 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:10.853465+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 32768 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:11.853577+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 32768 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:12.854429+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 24576 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:13.854525+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 24576 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:14.854626+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 24576 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:15.854835+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16384 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:16.854958+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16384 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:17.855065+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 8192 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:18.855166+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 8192 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:19.855261+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 8192 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:20.855399+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 0 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:21.855519+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 0 heap: 71163904 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:22.855631+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 1040384 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:23.855739+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 1040384 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:24.855795+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 1040384 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:25.855927+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 1032192 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:26.856043+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 1032192 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:27.856142+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 1024000 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:28.856239+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 1024000 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:29.856388+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 1024000 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:30.856522+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 1015808 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:31.856622+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 1015808 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:32.856736+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 1015808 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:33.856800+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 999424 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:34.856899+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 999424 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:35.857026+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 991232 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:36.857144+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 991232 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:37.857262+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 983040 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:38.857384+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 974848 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:39.857491+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 966656 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:40.857625+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 966656 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:41.857743+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 966656 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:42.857874+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 958464 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:43.857973+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 958464 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:44.858076+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 958464 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:45.858175+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 950272 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:46.858316+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 950272 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:47.858447+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 950272 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:48.858547+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 950272 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:49.858655+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 950272 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:50.858790+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 950272 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:51.858895+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 942080 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:52.859057+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 942080 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:53.859184+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 942080 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:54.859284+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 942080 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:55.859428+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 942080 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:56.859585+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 942080 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:57.859794+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 942080 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:58.859912+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 942080 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:49:59.860007+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 942080 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:00.860167+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 942080 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:01.860265+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 942080 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:02.860374+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 942080 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:03.860485+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 942080 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:04.860576+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 942080 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:05.860675+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 925696 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:06.860790+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 925696 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:07.860896+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 925696 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:08.861007+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 925696 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:09.861108+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 925696 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:10.861241+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 925696 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:11.861380+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 925696 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:12.861497+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 917504 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:13.861609+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 917504 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:14.861739+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 917504 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:15.861794+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 917504 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:16.861889+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 901120 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:17.861983+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 901120 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:18.862120+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 901120 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:19.862245+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 901120 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:20.862390+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 901120 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:21.862494+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 901120 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:22.862593+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 901120 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:23.862690+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 901120 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:24.862796+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 901120 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:25.862893+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 901120 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:26.863000+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 901120 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:27.863105+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 901120 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:28.863225+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 901120 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:29.863359+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 901120 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:30.863471+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 892928 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:31.863567+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 892928 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:32.863683+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 892928 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:33.863810+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 892928 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:34.863915+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 892928 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:35.864018+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 892928 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:36.864096+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 868352 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:37.864189+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 868352 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:38.864273+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 868352 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:39.864362+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 868352 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:40.864481+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 868352 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:41.864571+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 868352 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:42.864665+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 868352 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:43.864775+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 868352 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:44.864942+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 860160 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:45.865178+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 860160 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:46.865289+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 860160 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:47.865393+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 860160 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:48.865526+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 860160 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:49.865662+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 860160 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:50.865807+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 860160 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:51.865923+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 860160 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:52.866031+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 851968 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:53.866131+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 851968 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:54.866287+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 851968 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:55.866386+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 851968 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:56.866514+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:57.866652+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:58.866794+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:50:59.866970+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:00.867163+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:01.867302+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:02.867430+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:03.867574+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:04.867683+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:05.867787+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 819200 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:06.867913+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 819200 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:07.868019+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 819200 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:08.868146+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 819200 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:09.869456+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 819200 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:10.869597+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 819200 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:11.869715+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 819200 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:12.869916+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 819200 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:13.870021+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 819200 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:14.870117+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 811008 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:15.870214+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 811008 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:16.870338+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 794624 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:17.870465+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 794624 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:18.870594+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 794624 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:19.870722+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 794624 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:20.870888+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 794624 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:21.871016+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 794624 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:22.871117+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 786432 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:23.871217+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 786432 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:24.871319+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 786432 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:25.871430+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 786432 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:26.871545+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 786432 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:27.871664+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 786432 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:28.871789+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 786432 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:29.871905+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 786432 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:30.872063+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 786432 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:31.872187+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 786432 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:32.872314+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 786432 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:33.872444+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 786432 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:34.872595+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 786432 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:35.872718+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 786432 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:36.872886+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 770048 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:37.873022+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 770048 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:38.873159+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 770048 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:39.873293+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 770048 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:40.873416+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 770048 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:41.873534+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 868352 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:42.873645+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 868352 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:43.873746+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 868352 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:44.873887+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 868352 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:45.873986+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 868352 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:46.874124+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 868352 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:47.874224+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 868352 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:48.874336+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 868352 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:49.874434+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 868352 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:50.874579+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 868352 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:51.874728+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 868352 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:52.874793+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 868352 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:53.874908+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 868352 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:54.875038+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 868352 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:55.875142+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 860160 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:56.875251+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 860160 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:57.875381+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 860160 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:58.875504+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 860160 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:51:59.875645+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 860160 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:00.875811+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 860160 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:01.875928+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 860160 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:02.876539+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 851968 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:03.876677+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 851968 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:04.876797+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 851968 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:05.876938+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:06.877062+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:07.877227+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:08.877384+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:09.877538+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:10.877706+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:11.877802+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:12.877928+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:13.878086+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:14.878226+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:15.878366+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:16.878823+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:17.879585+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:18.879737+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:19.879870+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:20.880041+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:21.880156+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:22.880288+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:23.880418+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:24.880574+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:25.880730+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:26.880879+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:27.881045+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:28.881166+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:29.881271+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 835584 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:30.881410+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 819200 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:31.881571+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 819200 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:32.881735+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 819200 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:33.881897+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 819200 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:34.882021+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 819200 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:35.882139+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 819200 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:36.882296+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 819200 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:37.882426+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 811008 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:38.882563+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 811008 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:39.882690+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 811008 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:40.882845+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 811008 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:41.882973+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 802816 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:42.883136+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 802816 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:43.883267+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 802816 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:44.883479+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 802816 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:45.883694+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 802816 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:46.883881+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 802816 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:47.884022+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 794624 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:48.884179+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 794624 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:49.884343+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 794624 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:50.884549+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 778240 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:51.884669+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 778240 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:52.884775+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:53.884875+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 778240 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 ms_handle_reset con 0x56033070f400 session 0x5603304d14a0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: handle_auth_request added challenge on 0x560330fb3c00
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:54.884981+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 778240 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:55.885183+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 778240 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:56.885363+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 778240 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:57.885543+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 778240 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:58.885717+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 778240 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:52:59.885850+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 778240 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:00.886016+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 778240 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:01.886172+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 778240 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:02.886340+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 778240 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:03.886508+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 778240 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:04.886664+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 778240 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:05.886831+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 778240 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:06.886973+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71450624 unmapped: 761856 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:07.887111+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71450624 unmapped: 761856 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:08.887263+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 753664 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:09.887444+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 753664 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:10.887627+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 753664 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:11.887806+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 737280 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:12.887971+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 737280 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:13.888090+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 737280 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:14.888231+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 737280 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:15.888389+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 737280 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:16.888512+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 737280 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:17.888699+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 737280 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:18.888802+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 737280 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:19.888946+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 737280 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:20.889177+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 737280 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:21.889343+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 737280 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:22.889502+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 737280 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:23.889653+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 737280 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:24.889800+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 737280 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:25.889942+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 737280 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:26.890076+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 737280 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:27.890207+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 737280 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:28.890341+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 737280 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:29.890467+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 737280 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:30.890617+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 737280 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:31.890777+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 729088 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:32.890905+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 729088 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:33.891059+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 720896 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:34.891211+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 720896 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:35.891393+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 720896 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:36.891554+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 720896 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:37.891731+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 720896 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:38.891858+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 720896 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:39.892040+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 720896 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:40.892205+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 720896 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:41.892332+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 720896 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:42.892466+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 720896 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:43.892599+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 720896 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:44.892724+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 720896 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:45.892902+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 720896 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:46.893090+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 720896 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:47.893254+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 720896 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:48.893390+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 720896 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:49.893514+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 720896 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:50.893700+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 720896 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:51.893878+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 704512 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:52.894025+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 704512 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:53.894162+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 704512 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:54.894278+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 704512 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:55.894388+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 704512 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:56.894509+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 696320 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:57.894625+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 696320 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:58.894802+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 696320 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:53:59.894922+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 696320 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:00.895064+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 696320 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:01.895215+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 696320 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:02.895350+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 696320 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:03.895491+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 696320 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:04.895610+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 696320 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:05.895892+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 696320 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:06.896020+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 696320 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:07.896148+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 696320 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:08.896279+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 696320 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:09.896428+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 696320 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:10.896597+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 696320 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:11.896735+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71532544 unmapped: 679936 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:12.896870+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71532544 unmapped: 679936 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:13.897030+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 671744 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:14.897201+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 671744 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:15.897314+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 671744 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:16.897462+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 671744 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:17.897605+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 671744 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:18.897744+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 671744 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:19.897892+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 671744 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:20.898002+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 671744 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:21.898148+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 671744 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:22.898279+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 671744 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:23.898394+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 671744 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:24.898526+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 671744 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:25.898618+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 671744 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:26.898740+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 671744 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:27.898865+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 671744 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:28.899037+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 671744 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:29.899207+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 671744 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:30.899352+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 671744 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:31.899475+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 655360 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:32.899627+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 655360 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:33.899771+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 655360 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:34.899910+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 655360 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:35.900053+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 655360 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:36.900241+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 655360 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:37.900399+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 655360 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:38.900553+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 647168 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:39.900691+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 647168 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:40.900844+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 647168 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:41.900986+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 638976 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:42.901123+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 638976 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:43.901236+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 630784 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:44.901375+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 630784 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:45.901482+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 630784 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:46.901592+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 630784 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:47.901746+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 630784 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:48.901924+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 630784 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:49.902060+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 630784 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:50.902235+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 630784 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:51.902388+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 614400 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:52.902510+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 614400 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:53.902644+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 614400 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:54.902801+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 614400 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:55.902921+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 614400 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:56.903059+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 614400 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:57.903211+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 614400 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:58.903370+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 614400 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:54:59.903521+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 614400 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:00.903670+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 614400 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:01.903801+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 606208 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:02.903912+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 606208 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:03.904055+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 606208 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:04.904164+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 606208 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:05.904275+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 581632 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:06.904397+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 581632 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:07.904526+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 581632 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:08.904668+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 581632 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:09.904796+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 581632 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:10.904957+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 581632 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:11.905072+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 565248 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:12.905185+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 565248 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:13.905319+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 565248 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:14.905474+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 565248 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:15.905656+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 557056 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:16.905817+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 557056 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:17.905942+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 557056 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:18.906096+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 557056 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:19.906229+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 557056 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:20.906399+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 557056 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:21.906529+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 557056 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:22.906643+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 557056 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:23.906804+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 557056 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:24.906918+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 557056 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:25.907060+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 557056 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:26.907194+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71663616 unmapped: 548864 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:27.907343+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71663616 unmapped: 548864 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:28.907488+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71663616 unmapped: 548864 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:29.907628+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71663616 unmapped: 548864 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:30.907814+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71663616 unmapped: 548864 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:31.907977+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 532480 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:32.908125+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 532480 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:33.908267+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 540672 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:34.908395+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 540672 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:35.908511+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 540672 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:36.908632+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 540672 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:37.908784+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 540672 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:38.908932+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 532480 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:39.909110+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 532480 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:40.909281+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 532480 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:41.909419+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 532480 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:42.909558+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 532480 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:43.909687+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 532480 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:44.909816+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 532480 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:45.909956+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 532480 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:46.910089+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 532480 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:47.910228+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 532480 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:48.910352+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 532480 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:49.910503+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 532480 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:50.910696+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 532480 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:51.910841+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 516096 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:52.910999+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 516096 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:53.911110+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 516096 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:54.911258+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 516096 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:55.911410+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 516096 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:56.911517+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 516096 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:57.911673+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 516096 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:58.911792+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 516096 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:55:59.911934+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 516096 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:00.912119+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 516096 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:01.912272+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 516096 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:02.912406+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 516096 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:03.912546+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 516096 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:04.912662+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 516096 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:05.912791+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 516096 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:06.912933+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 516096 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:07.913044+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 516096 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:08.913147+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 516096 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:09.913263+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 516096 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:10.913390+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 516096 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:11.913512+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 491520 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:12.913644+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 491520 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:13.913751+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 491520 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:14.913861+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 483328 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:15.914001+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 483328 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:16.914117+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 483328 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:17.914260+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 483328 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:18.914400+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 475136 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:19.914529+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 475136 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:20.914690+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 475136 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:21.914824+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 475136 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:22.914992+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 475136 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:23.915131+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 475136 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:24.915251+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 475136 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:25.915381+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 475136 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:26.915526+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 491520 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:27.915673+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 491520 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:28.915817+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 491520 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:29.915997+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 491520 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:30.916162+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 491520 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:31.916515+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 491520 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:32.916622+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 491520 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:33.916779+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 491520 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:34.916928+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 491520 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:35.917069+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 491520 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:36.917188+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 491520 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:37.917338+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 491520 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:38.917461+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 491520 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:39.917608+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 491520 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:40.917813+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 491520 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:41.917987+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 491520 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:42.918135+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 491520 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:43.918274+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 491520 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:44.918444+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 491520 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:45.918565+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 491520 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:46.918719+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 491520 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:47.918870+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 491520 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:48.919030+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 483328 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:49.919167+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 483328 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:50.919326+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 483328 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:51.919461+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 483328 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:52.919597+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 483328 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:53.919729+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 483328 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:54.919875+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 483328 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:55.919999+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 483328 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:56.920183+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 483328 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:57.920340+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 483328 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:58.920486+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 483328 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:56:59.920706+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 483328 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:00.920909+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 475136 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:01.921098+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 475136 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:02.921298+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 475136 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:03.921561+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 475136 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:04.921802+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 475136 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:05.921984+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 458752 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:06.922161+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 458752 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:07.922317+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 450560 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:08.922453+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 450560 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:09.922602+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 450560 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:10.922741+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 450560 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:11.922910+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 450560 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:12.923274+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 450560 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:13.923383+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 450560 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:14.923827+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 450560 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:15.923997+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 450560 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:16.924123+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 450560 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:17.924279+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 450560 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:18.924393+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 450560 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:19.924520+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 450560 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:20.924646+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 450560 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:21.924750+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 450560 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:22.924873+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 450560 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:23.925009+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 450560 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:24.925126+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 450560 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:25.925263+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 450560 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:26.925366+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 434176 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:27.925501+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 434176 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:28.925595+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 434176 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:29.925695+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 434176 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:30.925793+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 434176 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:31.925896+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 434176 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:32.926016+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 434176 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:33.926110+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 434176 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:34.926204+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 434176 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:35.926298+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 434176 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:36.926395+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 434176 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:37.926488+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 434176 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:38.926581+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 12:58:13 compute-0 ceph-osd[88362]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 434176 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: bluestore.MempoolThread(0x56032f7d5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841753 data_alloc: 218103808 data_used: 172032
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:39.926669+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 253952 heap: 72212480 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcac2000/0x0/0x4ffc00000, data 0xad902/0x15c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:40.927497+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: do_command 'config diff' '{prefix=config diff}'
Nov 26 12:58:13 compute-0 ceph-osd[88362]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 26 12:58:13 compute-0 ceph-osd[88362]: do_command 'config show' '{prefix=config show}'
Nov 26 12:58:13 compute-0 ceph-osd[88362]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 26 12:58:13 compute-0 ceph-osd[88362]: do_command 'counter dump' '{prefix=counter dump}'
Nov 26 12:58:13 compute-0 ceph-osd[88362]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 26 12:58:13 compute-0 ceph-osd[88362]: do_command 'counter schema' '{prefix=counter schema}'
Nov 26 12:58:13 compute-0 ceph-osd[88362]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 1843200 heap: 74309632 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:41.927590+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 1712128 heap: 74309632 old mem: 2845415832 new mem: 2845415832
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: tick
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_tickets
Nov 26 12:58:13 compute-0 ceph-osd[88362]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T12:57:42.927690+0000)
Nov 26 12:58:13 compute-0 ceph-osd[88362]: do_command 'log dump' '{prefix=log dump}'
Nov 26 12:58:13 compute-0 ceph-mon[74966]: from='client.14531 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 12:58:13 compute-0 ceph-mon[74966]: from='client.14535 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 12:58:13 compute-0 ceph-mon[74966]: pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:58:13 compute-0 ceph-mon[74966]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 26 12:58:13 compute-0 ceph-mon[74966]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 26 12:58:13 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/4081113466' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 26 12:58:13 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 26 12:58:13 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1626771315' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 26 12:58:14 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Nov 26 12:58:14 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3653471926' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 26 12:58:14 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:58:14 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Nov 26 12:58:14 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1374549756' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 26 12:58:14 compute-0 ceph-mon[74966]: from='client.14547 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:14 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1626771315' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 26 12:58:14 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/3653471926' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 26 12:58:14 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1374549756' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 26 12:58:14 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Nov 26 12:58:14 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1315219921' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 26 12:58:14 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14557 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:15 compute-0 systemd[1]: Starting Hostname Service...
Nov 26 12:58:15 compute-0 systemd[1]: Started Hostname Service.
Nov 26 12:58:15 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Nov 26 12:58:15 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/880344487' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 26 12:58:15 compute-0 ceph-mon[74966]: pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:58:15 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/1315219921' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 26 12:58:15 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/880344487' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 26 12:58:15 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Nov 26 12:58:15 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4200196509' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 26 12:58:15 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14563 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:16 compute-0 ceph-mon[74966]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 12:58:16 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Nov 26 12:58:16 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/368214642' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 26 12:58:16 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:58:16 compute-0 ceph-mon[74966]: from='client.14557 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:16 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/4200196509' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 26 12:58:16 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/368214642' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 26 12:58:16 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14567 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:16 compute-0 ceph-osd[88362]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 12:58:16 compute-0 ceph-osd[88362]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5699 writes, 23K keys, 5699 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5699 writes, 941 syncs, 6.06 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56032f6f71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 26 12:58:16 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14569 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:17 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Nov 26 12:58:17 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3092950181' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 26 12:58:17 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Nov 26 12:58:17 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4009425467' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 26 12:58:17 compute-0 ceph-mon[74966]: from='client.14563 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:17 compute-0 ceph-mon[74966]: pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:58:17 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/3092950181' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 26 12:58:17 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/4009425467' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 26 12:58:17 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14575 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:17 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14577 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:17 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:58:17 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 12:58:17 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:58:17 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:58:17 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:58:17 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:58:17 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:58:17 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:58:17 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:58:17 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:58:17 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:58:17 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 12:58:17 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:58:17 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:58:17 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:58:17 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 12:58:17 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:58:17 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 12:58:17 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:58:17 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 12:58:17 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 12:58:17 compute-0 ceph-mgr[75236]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 12:58:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Nov 26 12:58:18 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2861520635' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 26 12:58:18 compute-0 ceph-mgr[75236]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 12:58:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Nov 26 12:58:18 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2552200460' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 26 12:58:18 compute-0 ceph-mon[74966]: from='client.14567 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:18 compute-0 ceph-mon[74966]: from='client.14569 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:18 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2861520635' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 26 12:58:18 compute-0 ceph-mon[74966]: from='client.? 192.168.122.100:0/2552200460' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 26 12:58:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 12:58:18 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3566899290' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 12:58:18 compute-0 ceph-mon[74966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 12:58:18 compute-0 ceph-mon[74966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3566899290' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 12:58:18 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14583 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 12:58:19 compute-0 ceph-mgr[75236]: log_channel(audit) log [DBG] : from='client.14589 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
